When is a tool the right tool for the job?
This week I spent time with a group of leaders in research software engineering at a workshop on the implications of using LLMs in their work. I gave a short lightning talk and shared a position paper representing my current thinking on AI-assisted coding.
My position
Here's the position paper I shared prior to the workshop:
I specialise in creating learning experiences to help people acquire developer skills. I’ve mostly worked in software industry settings, such as developer community platforms that enable people to make web apps. My perspective on AI-assisted coding is very mixed. On the one hand I can’t deny the enabling potential of a technology that helps people build software solutions without engineering skills, and on the other I’m concerned about a loss of understanding around software implementation, together with a decreasing investment in those skills.
I’m not an expert in research engineering at all, so my perspective is entirely based on what I’m seeing in industry. The panic around jobs potentially being made obsolete is making it incredibly difficult for us to take the time necessary to explore the true potential and limitations of this new technology. There is undeniable utility here, but the costs for adoption are still largely unknown, in terms of individual and organisational success over the long term.
What preoccupies me most as an educator is the “comprehension debt” or “cognitive debt” that appears to accompany AI-assisted coding. We’re seeing a widening gap between entry-level and experienced software engineers. It’s not clear how we’ll create paths for those early in their careers to acquire the skills that let experienced developers get the most effective use from LLMs – skills they acquired the “old-fashioned” way, by writing code.
There’s a parallel gap at the systems level, as we lose the ability to build solid mental models of the systems we’re using LLMs to generate. In software worked on collaboratively by teams, this is a more significant problem: the source code exists not just to provide instructions for a computer, but also as a shared human-readable representation of the system, a place where we come together to reason about how it works. The code performs a function natural language is not suited to.
When it comes to comprehension, we’ll need to learn how to decide when LLMs are the right tool for the job, and when they aren’t. That decision depends on what level of fidelity to a system is the optimal one to work at in order to meet individual and shared goals – balancing those over the short and long term is going to be both challenging and essential.
The workshop
I have to say it was a luxury to be in a room with people who are thinking deeply about AI code generation and how we can prepare for its impact. I've found very few spaces where I can have these conversations and begin to grapple with the detail of this new automation, both in terms of opportunities and risks of harm.
It was also fantastic to be able to sit at a table of educators who are on the front line teaching people the skills they believe will serve their individual and shared goals – that is not an easy task, especially right now!
The tension I keep coming back to is that language models could enable programming learning, but the platform interaction patterns are not optimised for it – if anything they're designed to avoid learning, as they're funded on the belief that AI removes the need for human beings to do certain categories of software work. I do believe this will change: as incidents continue to rise, organisations will discover with a new urgency the need for their teams to understand codebases.
I learned a ton this week – I have a ludicrous number of tabs open in my browser for later reading, lol. I left the experience more motivated than ever to experiment with ways to enable learning in this new way of making software.
