Vibe engineering (simonwillison.net)
649 points by janpio 6 days ago
I just feel so discouraged reading this somehow. I used to have a hard-to-get, in-demand skill that paid lots of money, and even though programming languages, libraries and web frameworks were always evolving, I felt I could always keep up because I'm smart. But now, with people like Simon Willison writing about this new way of coding with agents and multiple streams of work going on at once, and it sounding like this is the future, I just feel discouraged: it sounds like so much work. I've tried using coding agents and they help a bit, but I find it way less fun to be waiting around for agents to do stuff, and it's much harder to get into a flow state while managing several of them. It makes me want to move into something completely different, like sales.
I'm really sorry to hear this, because part of my goal here is to help push back against the idea that "programming skills are useless now, anyone can get an LLM to write code for them".
I think existing software development skills get a whole lot more valuable with the addition of coding agents. You can take everything you've learned up to this point and accelerate the impact you can have with this new family of tools.
I said a version of this in the post:
> AI tools amplify existing expertise. The more skills and experience you have as a software engineer the faster and better the results you can get from working with LLMs and coding agents.
A brand new vibe coder may be able to get a cool UI out of ChatGPT, but they're not going to be able to rig up a set of automated tests with continuous integration and continuous deployment to a Kubernetes cluster somewhere. They're also not going to be able to direct three different agents at once in different areas of a large project that they've designed the architecture for.
I'm not sure that having the patience to work with something whose performance is very inconsistent and that frequently lies is an extension of existing development skills. It doesn't work like the tools developers use, and it doesn't work like the people developers work with. Furthermore, techniques for working with agents today may be completely outdated a year from now. The acceleration is also inconsistent: sometimes you get a speedup, sometimes a slowdown.
Generative AI is at once incredibly impressive and completely unreliable. That makes it interesting, but also very uncertain. Maybe it's worth investing the effort to master today's agents, and maybe I'd be better off waiting until these things get better.
You wrote:
> Getting good results out of a coding agent feels uncomfortably close to getting good results out of a human collaborator. You need to provide clear instructions, ensure they have the necessary context and provide actionable feedback on what they produce.
That is true (about people), but it misses the most important thing for me: it's not about the information I give them, but about the information they give me. For good results, regardless of their skill level, I need to be able to trust absolutely that they tell me what challenges they've run into and what new knowledge they've gained that I may have missed in my own understanding of the problem. If that doesn't happen, I won't get good results. If that kind of communication only reliably happens through code I have to read, it becomes inefficient. If I can't trust an agent to tell me what I need to know (which is what I rely on when working with people), then the whole experience breaks down.
> I'm not sure that having the patience to work with something with a very inconsistent performance and that frequently lies is an extension of existing development skills.
If you've been tasked with leadership of an engineering effort involving multiple engineers and stakeholders, you know that this is in fact a crucial part of the role, and more so the more senior you get. It is much the same with people: know their limitations, show them a path to success, help them overcome their limitations by laying down the right abstractions and giving them the right coaching, and make it easier to do the right thing. Most of the same approaches apply. When we do these things with people it's called leadership or management. With agents, it's context engineering.
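To make that concrete, here's a rough sketch (in Python, with entirely made-up names) of the kind of thing I mean by context engineering: spelling out the goal, the guardrails and the files you want the agent to lean on before it starts, much as you'd brief a new team member.

    # Hypothetical illustration of "context engineering" for a coding agent.
    # The names and structure here are assumptions, not any specific tool's API.
    from dataclasses import dataclass, field

    @dataclass
    class AgentContext:
        goal: str                                                  # what success looks like
        constraints: list[str] = field(default_factory=list)      # guardrails / known limitations
        reference_files: list[str] = field(default_factory=list)  # abstractions to point the agent at

        def to_prompt(self) -> str:
            """Render the briefing as a single prompt block."""
            lines = [f"Goal: {self.goal}", "Constraints:"]
            lines += [f"- {c}" for c in self.constraints]
            lines.append("Read these files first:")
            lines += [f"- {path}" for path in self.reference_files]
            return "\n".join(lines)

    ctx = AgentContext(
        goal="Add retry logic to the HTTP client without changing its public API",
        constraints=[
            "Do not modify files outside src/http/",
            "All new code needs unit tests",
            "Ask before adding new dependencies",
        ],
        reference_files=["src/http/client.py", "docs/testing.md"],
    )
    print(ctx.to_prompt())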
I reached that position 15 years ago, so I can tell you this is untrue, in the sense that the experience is completely different from working with an LLM.
Training is one thing, but training doesn't increase the productivity of the trainer; it's meant to improve the capability of the trainee.
At any level of capability, though - whether we're talking about an intern after one year of university or a senior developer with 20 years of experience - effective management requires that you can trust the person to tell you when they've hit a snag, or anything else you may need to know. We may not be talking 100% trust, but not far from it, either. You can't keep working with someone who fails to tell you what you need to know even 10% of the time, regardless of their level. LLMs are not at that acceptable level yet, so the experience is not similar to technical leadership.
If you've ever been tasked with leading one or more significant projects, you know that if you feel you have to review every line of code anyone on the team writes, at every step of the process, that's not the path to success (if you did that, not only would progress be slow, but your team wouldn't like you very much). Code review is a very important part of the process, but it's not an efficient mechanism for day-to-day communication.
> effective management requires that you're able to trust that the person tells you when they've hit a snag or anything else you may need to know
Nope, effective management is on YOU, not them. If everyone you're managing is completely transparent and tells you things immediately, you're playing in easy mode.
So the role of a coding agent is to challenge me to play in hard mode?
And if getting developers not to lie or hide important information is on me, what should I do to get an LLM to stop doing that?
No, the point is that LLMs will behave the same way the humans you have to manage do (there are obviously differences - e.g. LLMs tend to forget context more often than most humans, but they also tend to know a lot more than the average human). So some of the same skills that help you manage humans will also help you get more consistency out of LLMs.
I don't know of anyone who would want to work with someone who lies to them over and over and will never stop. LLMs do certain things better than people, but my point is that there's nothing you can trust them to do. That's fine for research (we don't trust, and don't need to trust, any human or tool to do fully exhaustive research anyway), but not for most other work tasks. That's not to say LLMs can't be put to good use, but something that can never be trusted behaves like neither a person nor a tool.