'It is a better programmer than me': The reality of being laid off due to AI
independent.co.uk
24 points by roboboffin 21 hours ago
I'm sure some people can be replaced by AI. I'm also sure a lot of these stories are just marketing for somebody's crappy GPT wrapper.
I think workplaces will have to allow people time to adapt, so that if your particular skill set is replaced by AI, you have the ability to retrain for a role that isn’t.
Ultimately, a large part of many jobs is repetitive and can be replaced by pattern matching. The other side, creating new patterns, is hard and takes time. So employers will have to take this into account: there may be long periods of “unproductive” time, or riskier experiments to try out new ideas.
Who's doing all the prompting for the people being laid off, though? What does that transition look like?
Does the middle manager who previously bugged people to do the work now write the prompts, commit the code, and file the documents themselves?
I’m not saying people will be laid off, although this is what the article is about. So, I think people will still be prompting, but if you can prompt an agent and it can happily code away, what are you supposed to be doing? Watching it do its work? The only option is that you will have to constantly generate ideas for new work to drive value. This is something that generally happens over time now, but as implementation becomes quicker, idea generation will have to accelerate.
> So, I think people will still be prompting, but if you can prompt an agent and it can happily code away, what are you supposed to be doing? Watching it do its work?
Well, what do managers do once they prompt junior developers? ;)
Also, "tell a prompt and wait for it to finish without intervention" is not something that happens even with magical Claud Code.
I'd really like to see some actual theory surfaced about which positions (and people) can be laid off due to AI, and about who then actually runs the LLMs in the company after the layoffs, and how. I've never been in a company where new work wouldn't fill up the available developer capacity, so I'm really interested in what the new world would look like.
I just had a thought. It used to be that complex C++ systems took so long to compile that developers would go and have a coffee, etc. This was before distributed compilation.
Maybe it will return to that: the job will have a lot of waiting around, and “free” time.
> Also, "tell a prompt and wait for it to finish without intervention" is not something that happens even with magical Claud Code.
That is how you interact with OpenAI's Codex though.
I agree with you but there's something ironic about seeing that comment here, especially considering how many jobs tech has replaced in the last few decades without people having the time to retrain.
It’s the responsibility of individuals to continue learning. Choosing to stop learning, and to be clear, it is a choice, can have dire consequences.
We are now a few years into LLMs being widely available/used, and if someone’s chosen to stick their head in the sand and ignore what’s happening around them, then that’s on them.
> I think workplaces will have to allow people time to adapt.
This feels like a very outdated view to me. Maybe we are worse off for that being the case, but by and large it will not happen. The people who take initiative and learn will advance, while the people who refuse to learn anything new or to change how they’ve been doing the job for XX years will be pushed out.
> It’s the responsibility of individuals to continue learning
Using AI is the opposite of learning.
I'm not just trying to be snarky and dismissive, either.
That's the whole selling point of AI tools. "You can do this without learning it, because the AI knows how"
> That's the whole selling point of AI tools. "You can do this without learning it, because the AI knows how"
I'm sure we are veering into "No true Scotsman" territory, but that's not the type of learning/tools I'm suggesting. "Vibe Coding" is a scourge for anything more than a one-off POC, but LLMs themselves are very helpful for pinpointing errors and writing common blocks of code (Copilot auto-complete style), and even things like Aider/Claude Code can be used in a good way, if and only if you are reviewing _all_ the code they generate.
As soon as you disconnect yourself from the code it's game over. If you find yourself saying "Well it does what I want, commit/ship it" then you're doing it wrong.
On the other hand, there are some people who refuse to use LLMs for reasons ranging from silly to absurd. Those people will be passed by and will have no one to blame but themselves. LLMs are simply another tool in the toolbox.
I am not a horse cart driver, I am a transportation expert. If the means of transport changes/advances, then so will I. I will not get bogged down in "I've been driving horses for XX years and that's what I want to do till the day I die"; that's just silly. You have to change with the times.
> As soon as you disconnect yourself from the code it's game over
We agree on this.
The only difference is that I view using LLM-generated code as already a significant disconnect from the code, and you seem to think some LLM usage is possible without disconnecting from the code.
Maybe you're right, but I have been trying to use them this way, and so far I find it makes me completely detached from what I'm building.
> The only difference is that I view using LLM-generated code as already a significant disconnect from the code, and you seem to think some LLM usage is possible without disconnecting from the code
It's a gray area for sure, and almost no one online is talking about the same thing when they say "LLM Tools", "LLM", "Vibe Coding", "AI", etc., which makes it even harder to have conversations. It's probably a lot like the joke: "Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?".
For myself, I'm fine with Github Copilot auto-completions (up to ~10 lines max) and I review every line it wrote. Most often I enjoy it for boilerplate-ish things where an abstraction would be premature but I'd still have to type out a bunch of near-identical code. Being able to write 1-2 examples and have it extrapolate the rest is quite nice.
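To make the "write 1-2 examples and have it extrapolate" point concrete, here's a minimal sketch in Python; the settings names and defaults are made up for illustration, not from any real project:

    import os
    from dataclasses import dataclass

    @dataclass
    class Settings:
        db_host: str
        db_port: int
        cache_host: str
        cache_port: int

    def load_settings() -> Settings:
        # You hand-write the first one or two accessors...
        db_host = os.environ.get("DB_HOST", "localhost")
        db_port = int(os.environ.get("DB_PORT", "5432"))
        # ...and the auto-complete extrapolates the same pattern for the
        # remaining fields, which you still review line by line.
        cache_host = os.environ.get("CACHE_HOST", "localhost")
        cache_port = int(os.environ.get("CACHE_PORT", "6379"))
        return Settings(db_host, db_port, cache_host, cache_port)

An abstraction (a loop over field names, say) would be premature for four fields, but typing out the last two lines by hand is exactly the kind of drudgery the completion removes.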
I've used Aider/Claude Code [0] as well and had success, but I don't love the workflow of asking it to do something, then waiting for it to spit out a bunch of code I need to review. I expect this will improve, and I have seen some improvement already. For some tasks it has me beat (speed of writing UI), but for most logic-type things I have been unable to prompt it well enough, or give it enough of the right context, to solve the problem. Because of this I mainly use these tools for one-off, POC, or just screwing around.
I also find things like explaining errors or tracking down the root cause of an error useful.
I am very much _not_ a fan of "Vibe Coding" or anything that pretends it can be "no code"/"low code". I don't know if I'll ever be comfortable not reviewing the code directly, but we will see. I'm sure there were assembly developers who swore they'd never use C, C developers who swore they'd never use C++, C++ developers who swore they'd never use Python, and so on and so forth. It's not clear to me whether LLM-generated code is another step up or just a tool for the current level; I'm leaning heavily towards it just being a tool. I don't think "prompt engineer" is going to be a thing.
[0] And Continue.dev, Cursor, Windsurf, Codeium, Tabnine, Junie, Jetbrains AI, and more
> For myself, I'm fine with Github Copilot auto-completions (up to ~10 lines max) and I review every line it wrote
This is what I would like to use it for, but I have been struggling quite a bit with it.
If I have a rough idea of what a 10-line function might look like and Cursor offers an autocomplete suggestion, it is nice when it is basically what I had in mind and I can just accept it. This happens very rarely for me, though.
More often I find the suggestion is just wrong enough that I want to change it, so I don't accept it. But this also shoves the idea I had in my head right out of my brain, and now I'm in a worse position, having to reconstruct the idea I already had.
This happened to me enough that I wound up turning these suggestions off entirely. It was ruining my ability to achieve any kind of flow.
> Because of this I mainly use these tools for one-off, POC, or just screwing around.
Yeah... My company is making these tools mandatory and I suspect they are collecting metrics to see who is using them and how much.
It's been very stressful and overall an extremely negative experience for me, which is made worse when I read the constant cheerleading online, and the "You're just using it wrong" criticisms of my negative experience.
> Yeah... My company is making these tools mandatory and I suspect they are collecting metrics to see who is using them and how much
I'm sorry to hear this. I have encouraged the developers I manage to try out the tools, but we're nowhere close to "forcing" anyone to use them. It hasn't come up yet, but I'll be pushing back hard on any code that is clearly LLM-generated, especially if the developer who "wrote" it can't explain what's happening. Understanding and _owning_ the code the LLMs generate is part of it; "ChatGPT said..." or "Cursor wrote..." are not valid answers to a question like "Why did you do it this way?". LLM-washing (or whatever you want to call it) will not be tolerated: if you commit it, you are responsible for it.
> It's been very stressful and overall an extremely negative experience for me, which is made worse when I read the constant cheerleading online, and the "You're just using it wrong" criticisms of my negative experience
I hate hearing this, because there are plenty of people writing blog posts or making youtube videos about how they are 10000x-ing their workflow. I think most of those people are completely full of it. I do believe it can be done (managing multiple Claude Code or similar instances running), but it turns you into a code reviewer, and because you've already ceded so much control to the LLM, it's easy to fall into the trap of thinking "One more back and forth and the LLM will get it" (cut to 10+ back-and-forths later, when you need to pull the ripcord and reset back to the start).
Copilot and short suggestions (no prompting from me, just it suggesting the next few lines in-line) are the sweet spot for me. I fear many people are incorrectly extrapolating LLM capability: "Because I prompted my way to a POC, clearly an LLM would have no problem adding a simple feature to my existing code base." Not so, not by a long shot.