Coding after coders: The end of computer programming as we know it?
nytimes.com
109 points by angst 2 days ago
Other gift link: https://www.nytimes.com/2026/03/12/magazine/ai-coding-progra...
> in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.
I've always hated solving puzzles with my deterministic toolbox, learning along the way and producing something of value at the end.
Glad that's finally over so I can focus on the soulful art of micromanaging chatbots with markdown instead.
Having an AI is like having a dedicated assistant or junior programmer that sometimes has senior-level insights. I use it to do tedious tasks where I don't care about the code - like today I used it to generate a static web page that let me experiment with the spring-ai chatbot code I was writing - basic stuff. But yesterday it was able to track down the cause of a very obscure bug having to do with a pom.xml loading two versions of the same library - in my experience I've spent a full day on that type of bug, and Claude was able to figure it out from the exception in just minutes.
But when I've used AI to generate new code for features I care about and will need to maintain, it's never gotten it right. I can do it myself in less code and cleaner. It reminds me of code in the 2000s that you would get from your team in India - lots of unnecessary code copy-pasted from other projects/customers (I remember getting code for an Audi project that had method names related to McDonald's).
I think, though, that the day is coming where I can trust the code it produces, and at that point I'll just be writing specs. It's not there yet though.
I've generated 250KLoC this week, absolutely no changes in deps or any other shenanigans. I'm not even really trying to optimize my output. I work on plans/proposals with 2 or 3 agents simultaneously in Cursor while one does work, sometimes parallelized. I can't do that in less code and cleaner. I can't do it at all. Don't wait too long.
> I've generated 250KLoC this week
It's horrifying, all right, but not in the way you think lol. If you don't understand why this isn't a brag, then my job is very safe.
For any non-professional work, it's there for me.
Wire up an authentication system with SSO: done. Set up websockets, stream audio from the mic, transcribe with ElevenLabs: done.
Shit that would take me hours takes literally 5 mins.
All that stuff would take me about 5 minutes without AI. Those are things with 10,000 examples all over the web. AI is good at distilling the known solutions. But anything even slightly out of the ordinary, it fails miserably. I'd much rather write that code myself instead of spend an hour convincing an AI to do it for me.
There is absolutely no way. Those tasks take 5 mins to do. It'd be done by the time you read the documentation for ElevenLabs.
This is the take you get when you haven't really practiced driving these tools.
If coding truly becomes effortless to produce - and by extension a product becomes near free to produce - then I find it quite odd that the executive class thinks their businesses won’t be completely upended by a raging sea of competition.
Back in the day, programming was done on punch cards. In 20 years, that's how kids will see typing out lines of program code by hand.
At the rate things have been going, that is likely to happen in 20 days rather than 20 years.
how many times in the history of computer programming has there been an end to computer programming as we know it, successfully, and how many times predicted?
I can think of one successfully, off hand, although you could probably convince me there was more than one.
the principal phrase being "as we know it", since that implies a large-scale change to how it works but it continues afterwards, altered.
Off the top of my head, I can think of the following during my career:
1. COBOL (we actually did still use it back in the 80s)
2. AI back in the '80s (Dr. Dobb's was all concerned about it ...)
3. RAD
4. No-Code
5. Off-shoring
6. Web 2.0
7. Web 3.0
8. possibly the Ada/provably-correct push, depending on your area of programming
TBH - I think the AIs are nice tools, but they've got a long way to go before it's the 'end of computer programming as we know it'.

edit: formatting
Stack Overflow (and the internet in general) changed programming as we (at least some of us) knew it.
When I was learning programming I had no internet, no books outside of the library, and nobody to ask for days.
I remember vividly having spent days trying to figure out how to use the stdlib qsort, and not being able to.
Hmm - I'm not sure I'd say that 'changed programming' - but the internet in general changed 'learning to program'. I can remember when I first discovered Gopher and found I could read tons of recent material for free, or finding Stony Brook on the web - that was like a gold mine of algorithms! :-D
I'm not normally a fan of the NYT but this wasn't too bad. It passed the Gell-Mann test, and is clearly written by someone who knows the field well, even though the selection of quotes skews towards outliers -- I think Yegge, for instance, is pretty far out of the mainstream in his views on LLMs, whether ahead or sideways.
As a result a lot of the responses here are either quibbles or cope disguised as personal anecdotes. I'm pretty worried about the impact of the LLMs too, but if you're not getting use out of them while coding, I really do think the problem is you.
Since people always want examples, I'll link to a PR in my current hobby project, which Claude Code helped me complete in days instead of weeks: https://github.com/igor47/csheet/pull/68 Though this PR creates a bunch of tables, routes, services -- it's not just greenfield CRUD work. We're figuring out how to model a complicated domain (the rules of D&D 5e, including the 2014 and the 2024 revisions of those rules), integrating with existing code, thinking through complex integrations including with LLMs at run time. Claude is writing almost all the code; I'm just steering.
> it could also be that these software jobs won’t pay as well as in the past, because, of course, the jobs aren’t as hard as they used to be. Acquiring the skills isn’t as challenging.
This sounds opposite to what the article said earlier: newbies aren’t able to get as much use out of these coding agents as the more experienced programmers do.
This article is ragebaiting people and it's an embarrassing piece from the NYT.
Conversations of the future...
"Can you believe that Dad actually used to have to go into an office and type code all day long, MAUALLY??! Line by line, with no advice from AI, he had to think all by himself!"
I was thinking about that recently. Maybe decades from now people will look at things like the Linux kernel or Doom and be shocked that mere humans were able to program large codebases by hand.
This was literally part of the premise of The Jetsons. George's job was to press a single button while the computer RUDI did all the work.
The difference is, Jetsons wasn't a dystopia (unlike the current timeline), so when Mr. Spacely fired George, RUDI would take his side and refuse to work until George was re-hired.
> "Can you believe that Dad actually used to have to go into an office and type code all day long, MAUALLY??! Line by line, with no advice from AI, he had to think all by himself!"
Grumpy old man: "That's exactly why our generation was so much smarter than today's whippersnappers: we were thinking from morning to night the whole long day."
More likely:
"Dad, I've sent out 1000 applications and haven't had a call back. I can't take it anymore. Has it always been like this?"
The Dad: It's not my fault!
Because they are still making the same salary. In 5 years, when their job is eliminated, and they can't find work, they will regret their decision.
Their decision to... use AI for coding?
Well, their position on AI.
By their own accounts they are just pressing enter.
You have to hold the AI's hand to do even simple vanilla JS correctly. Or do framework code which is well documented all over the net. I love AI and use it for programming a lot, but the limitations are real.
The other day I (well, the AI) just wrote a Rust app to merge two huge (GBs of data) tables by discovering columns with data in common based on text distance (Levenshtein and Dice). It worked beautifully.
And I have NEVER written one line of Rust.
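For the curious, the core of that trick is simple enough to sketch. Here's a minimal Python version of the column-matching idea (the commenter's app was in Rust and streamed gigabytes; the table format, sample size, and threshold below are all illustrative assumptions):

```python
# Sketch: guess which columns of two tables hold the same kind of data by
# comparing sampled values with text-similarity metrics (Dice + Levenshtein).
from itertools import product

def bigrams(s: str) -> set[str]:
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a: str, b: str) -> float:
    """Sorensen-Dice coefficient over character bigrams (1.0 = identical)."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba and not bb:
        return 1.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Best of Dice and normalized Levenshtein similarity, case-folded."""
    a, b = a.lower(), b.lower()
    lev = 1 - levenshtein(a, b) / max(len(a), len(b), 1)
    return max(dice(a, b), lev)

def column_similarity(col_a: list[str], col_b: list[str], sample: int = 100) -> float:
    """For each sampled value in col_a, take its best match in col_b; average."""
    xs, ys = col_a[:sample], col_b[:sample]
    return sum(max(similarity(x, y) for y in ys) for x in xs) / len(xs)

# Tables as {column_name: values}; print likely join-key candidates.
t1 = {"customer": ["Ada Lovelace", "Alan Turing"], "city": ["London", "Bletchley"]}
t2 = {"name": ["A. Lovelace", "A. Turing"], "town": ["london", "bletchley park"]}
for (c1, v1), (c2, v2) in product(t1.items(), t2.items()):
    score = column_similarity(v1, v2)
    if score > 0.5:
        print(f"{c1} ~ {c2}: {score:.2f}")
```

On real GB-scale inputs you would stream and sample rather than load everything in memory, which is presumably why the AI reached for Rust.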
I don't understand the naysayers; to me the state of gen AI is like the Simpsons quote "worst day so far". Look where we are within 5 years of the first real GPT/LLM. The next 5 years are going to be crazy exciting.
The "programmer" position will become a "builder". When we've got LLMs that generate Opus quality text at 100x speed (think, ASIC based models) , things will get crazy.
Human minds are built to find patterns, and you should be careful not to assume the rate of improvement will continue forever based on nothing but a pattern.
Just the fact that even retail-quality hardware is still improving significantly at running local LLMs is a great sign. If AI quality remained the same, and the cost for local hardware dropped to $1000, it would still be the greatest thing since the internet IMO. So even if the worst happens and all progress stops, I'm still very happy with what we got.
>I'm still very happy with what we got
"One person's slop is another person's treasure"
I'm not all that impressed with "AI". I often "race" the AI by giving it a task to do, and then I start coding my own solution in parallel. I often beat the AI, or deliver a better result.
Artificial Intelligence is like artificial flavoring. It's cheap and tastes passable to most people, but real flavors are far better in every way even if it costs more.
The overall trend in AI performance will still be up and to the right, like everything else in computing over the past 50 years; improvement doesn't have to be linear.
Let me explain the naysayers: they know "programmer" has always meant "builder", and just because search is better and you can copy and paste faster doesn't mean you've built anything. First thing people need to realize is that no proprietary code is in those databases, and using AI will ultimately just get you regurgitated things people don't really care about. Use it all you want; you won't be able to do anything interesting - they aren't giving you valuable things for free. Anything of value will still take time and knowledge. The marketing hype is to reduce wages and prevent competition. Go for it.
I must say, I do love how this comment has provoked such varying responses.
My own observations about using AI to write code is that it changes my position from that of an author to a reviewer. And I find code review to be a much more exhausting task than writing code in the first place, especially when you have to work out how and why the AI-generated code is structured the way it is.
> especially when you have to work out how and why the AI-generated code is structured the way it is.
You could just ask it? Or you don’t trust the AI to answer you honestly?
You're anthropomorphizing.
LLMs can't lie nor can they tell the truth. These concepts just don't apply to them.
They also cannot tell you what they were "thinking" when they wrote a piece of code. If you "ask" them what they were thinking, you just get a plausible response, not the "intention" that may or may not have existed in some abstract form in some layer when the system selected tokens*. That information is gone at that point and the LLM has no means to turn that information into something a human could understand anyways. They simply do not have what in a human might be called metacognition. For now. There's lots of ongoing experimental research in this direction though.
Chances are that when you ask an LLM about their output, you'll get the response of either someone who now recognized an issue with their work, or the likeness of someone who believes they did great work and is now defending it. Obviously this is based on the work itself being fed back through the context window, which will inform the response, and thus it may not be entirely useless, but... this is all very far removed from what a conscious being might explain about their thoughts.
The closest you can currently get to this is reading the "reasoning" tokens, though even those are just some selected system output that is then fed back to inform later output. There's nothing stopping the system from "reasoning" that it should say A, but then outputting B. Example: https://i.imgur.com/e8PX84Z.png
* One might say that the LLM itself always considers every possible token and assigns weights to them, so there wouldn't even be a single chain of thought in the first place. More like... every possible "thought" at the same time.
There's a very wide range of programming tasks of differing difficulty that people are using / trying to use it for, and a very wide range of intelligence amongst the people that are using / trying to use it, and who are evaluating its results. Hence, different people have very different takes.
You're always reviewing code though. Either a teammate's PR, or maybe your own code in 3 months, or some legacy thing.
That is exactly my experience with Claude Code, too. It can create a lot of stuff impressively, but with LOTS more code than necessary. It's not really effective in the end. I have more than 35 years of coding experience and always dig into the newest stuff. Quality-wise it's still not more than junior-dev stuff, even with the latest models, sorry. And I know how to talk to these machines.
I don't have as many years of professional experience as you do, but IMO code pissing is one of the areas LLMs and "agentic tools" shine the least.
In both personal projects and $dayjob tasks, the highest time-saving AI tasks were:
- "review this feature branch" (containing hand-written commits)
- "trace how this repo and repo located at ~/foobar use {stuff} and how they interact with each other, make a Mermaid diagram"
- "reverse engineer the attached 50MiB+ unstripped ELF program, trace all calls to filesystem functions; make a table with filepath, caller function, overview of what caller does" (the table is then copy-pasted to Confluence)
- basic YAML CRUD
Also while Anthropic has more market share in B2B, their model seems optimized for frontend, design, and literary work rather than rigorous work; I find it to be the opposite with their main competitor.
Claude writes code rife with safety issues/vulns all the time, or at least more than other models.
This is not my experience either. If you put the work in upfront to plan the feature, write the test cases, and then loop until they pass... you can build a lot of high quality software quickly. The difference between a junior engineer using it and a great architect using it is significant. I think of it as an amplifier.
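For readers who haven't tried this workflow, the "write the test cases, then loop until they pass" step looks roughly like the sketch below. The `slugify` function and its module path are hypothetical, not from the comment:

```python
# test_slugify.py -- the human-authored spec. The agent's job is to make
# these pass; you re-run `pytest -x` after each iteration until green.
import pytest

from myapp.text import slugify  # hypothetical module the agent will write

@pytest.mark.parametrize("raw, expected", [
    ("Hello, World!", "hello-world"),
    ("  spaces   everywhere ", "spaces-everywhere"),
    ("already-a-slug", "already-a-slug"),
    ("", ""),  # edge case the agent must not forget
])
def test_slugify(raw, expected):
    assert slugify(raw) == expected
```

The point being that the quality of this file, not the prompt wording, is what separates the junior from the architect in the parent's framing.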
This honestly reads to me like "if you spend a lot of time doing tedious monotonous shit you can save a lot of time on the interesting stuff"
I have no interest being a "great architect" if architects don't actually build anything
> The difference between a junior engineer using it and a great architect using it is significant
Yes, juniors are trying to use AI with the minimum input. This alone tells you a lot.
Not in my experience. But then again, lots of programmers are limited in how they use AI to write code. Those limitations are definitely real.
That's just not even remotely my experience, and I am ~20k hours into my programming career. AI makes most things so much faster that it is hard to justify ever doing large classes of things yourself (as much as this hurts my aesthetic sensibilities, it simply is what it is).
I've never seen a human estimate their "programming career" in kilohours. Is that supposed to look more impressive than years? So, you've been programming only about 7 years? I guess I'm at about "170 kilohours".
Part of this depends on if you care that the AI wrote the code "your way." I've been in shops with rather exotic and specific style guides and standards which the AI would not or will not conform to.
Yeah, I also highly value consistency in my projects, which forces me to keep an eye on the LLM and steer it often. This limits my overall velocity, especially on larger features. But I'm still much faster with the agent. Recent example: https://github.com/igor47/csheet/pull/68 -- this took me a couple of hours pairing with Claude Code, which is insane given the size of the work here. Though this PR creates a bunch of tables, routes, services -- it's not just greenfield CRUD work. We're figuring out how to model a complicated domain, integrating with existing code, thinking through complex integrations including with LLMs at run time. Claude is writing almost all the code; I'm just steering.
Then have the AI write a deterministic transformation tool that turns it into the specific style and standard that is needed.
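In Python-land the cheapest version of that is to not even write the tool, just pipe the agent's output through an existing deterministic formatter. A minimal sketch using black's string API (the 100-character line length stands in for whatever your house style dictates):

```python
# normalize.py -- reformat agent-generated Python files to house style.
# black.format_str is deterministic: same input and mode, same output.
import pathlib
import sys

import black

def normalize(source: str) -> str:
    """Apply house formatting; raises black.InvalidInput on syntax errors."""
    return black.format_str(source, mode=black.Mode(line_length=100))

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        path = pathlib.Path(arg)
        path.write_text(normalize(path.read_text()))
```

For rules no off-the-shelf formatter covers, you'd extend this with your own AST rewrites, which is presumably what the comment has in mind.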
Most of this thread is debating whether models are good or bad at writing code... however, I think a more important question is what we feed the AI, because that dramatically determines the quality of the output.
When your agent explores your codebase trying to understand what to build, it reads schema files, existing routes, UI components, etc. -- easily 50-100k tokens of implementation detail. It's basically reverse-engineering intent from code. With that level of ambiguous input, no wonder the results feel like junior work.
When you hand it a structured spec instead including data model, API contracts, architecture constraints etc., the agent gets 3-5x less context at much higher signal density. Instead of guessing from what was built it knows exactly what to build. Code quality improves significantly.
I've measured this across ~47 features in a production codebase, with a median ratio of 4x less context with specs vs. random agent code exploration. For UI-heavy features it's 8-25x. The agent reads 2-3 focused markdown files instead of grepping through hundreds of KB of components.
To pick up @wek's point about planning from above: devs who get great results from agentic development aren't better prompt engineers... they're better architects. They write the spec before the code, which is what good engineering always was... AI just made the payoff for that discipline 10x more visible.
AI-assisted code can't even stick to the API documentation, especially if the data structures are not consistent and have evolved over time. You would see Claude literally pulling function after function from thin air, desperately trying to fulfill your complicated business logic, and even when it's complete, it doesn't look neat at all. Yes, it will have test coverage, but one more feature request will probably break the camel's back. And if you raise that PR to the rest of your team, good luck trying to summarise it all to your colleagues.
However if you just have an easy project, or a greenfield project, or don't care about who's going to maintain that stuff in 6 months, sure, go all in with AI.
I definitely wonder if the people going all-in on AI harnessing are working on greenfield projects, because it seems overwhelming to try to get that set up on a brownfield codebase where the patterns aren't consistent and the code quality is mixed.
So just iterate on it? Your complaint is that the model isn't one shotting the problem and reading your mind about style. It's like any coding workflow, make it work, then make it nice.
No, I never expect AI to one-shot (if I see such a miracle, it's usually because I needed a one-liner or something really simple and well documented, which I can also write on the whiteboard from memory).
Try iterating over well known APIs where the response payloads are already gigantic JSONs, there are multiple ways to get certain data and they are all inconsistent and Claude spits out function after function, laying waste to your codebase. I found no amount of style guideline documents to resolve this issue.
I'd rather read the documentation myself and write the code by hand than review, for the umpteenth time, how Claude splits these new functions between e.g. __init__.py and main.py and god knows where, mixing business logic with plumbing and transport layers as an art form. God, it was atrocious during the first few months of FastMCP.
It’s crazy how some people feel the AI and others don’t. But one group is wrong. It’s a matter of time before everyone feels the AI.
This is a very one-sided article, unashamedly so.
Where's the references to the decline in quality and embarrassing outages for Amazon, Microsoft, etc?
Do we know that it decreased the quality, or introduced more opportunities for bugs by simply increasing the velocity? If every commit has a fixed probability of having a bug, you'll run into more bugs in a week by going faster.
> Do we know that it decreased the quality, or introduced more opportunities for bugs by simply increasing the velocity?
That's an easy question to answer - you can look at outages per feature released.
You may be instead looking at outages per loc written.
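The fixed-probability argument above is easy to make concrete. A toy model, with every number invented for illustration:

```python
# With a constant bug probability per commit, going 3x faster triples
# bugs per week, while bugs per feature stays exactly the same.
p_bug = 0.05              # assumed chance a commit introduces a bug
commits_per_feature = 10  # assumed cost of one feature

for commits_per_week in (10, 30):  # pre-AI pace vs. accelerated pace
    features = commits_per_week / commits_per_feature
    bugs = commits_per_week * p_bug
    print(f"{commits_per_week} commits/wk: {bugs:.1f} bugs/wk, "
          f"{bugs / features:.2f} bugs/feature")
# 10 commits/wk: 0.5 bugs/wk, 0.50 bugs/feature
# 30 commits/wk: 1.5 bugs/wk, 0.50 bugs/feature
```

Which is why "outages per feature released" is the metric that distinguishes the two explanations.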
AI is constantly trying to introduce bugs into my code. I've started disabling it when I know exactly where I'm going with the code, because the AI is often a lot more confused than I am about where the code is going.
Do we know it increased the velocity and didn't just churn out more slop?
Even before AI the limiting factor on all of the teams I ever worked on was bad decisions, not how much time it took to write code. There seem to be more of those these days.
> “The reason that tech generally — and coders in particular — see L.L.M.s differently than everyone else is that in the creative disciplines, L.L.M.s take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.”
This doesn’t really make sense to me. GenAI ostensibly removes the drudgery from other creative endeavors too. You don’t need to make every painstaking brushstroke anymore; you can get to your intended final product faster than ever. I think a common misunderstanding is that the drudgery is really inseparable from the soulful part.
Also, I think GenAI in coding actually has the exact same failure modes as GenAI in painting, music, art, writing, etc. The output lacks depth, it lacks context, and it lacks an understanding of its own purpose. For most people, it’s much easier to intuitively see those shortcomings of GenAI manifest in traditional creative mediums, just because they come more naturally to us. For coding, I suspect the same shortcomings apply, they just aren’t as clear.
I mean, at the end of the day if writing code is just to get something that works, then sure, let’s blitz away with LLMs and not bother to understand what we’re doing or why we do it anymore. Maybe I’m naive in thinking that coding has creative value that we’re now throwing away, possibly forever.
It's really time that mainstream media picks up on 'agentic coding' and the implications of writing software becoming a commodity.
I'm an engineer (not only software) at heart, but after seeing what Opus 4.6-based agents are capable of, and especially the rate of improvement, I think the direction is clear.
I like 4.6 and agents based on it but can only qualify it as moderately useful.
It's all nonsense. It's just better search; the intelligence is not artificial. They are trying to convince everyone that they don't need to pay programmers. That's all, all it is. It'll work on the ignorant, who'll take less money to make sure it works and fix the bugs, which is mostly what they were paying for anyway. They just want to devalue the work of the people they are reliant on. Nothing new.
I think you’re a bit behind in your worldview. Just because it’s inconvenient to you that non-coders can now code doesn’t make it untrue.
No, they can’t.
It has nothing to do with inconvenience.
I really like that laymen now make these statements - they know better than people who have been working in the industry for decades.
It's an accelerator. A great tool if used well. But just like all the innovations before it that were going to replace programmers it simply won't.
I used Claude just the other day to write unit test coverage for a tricky system that handles resolving updates into a consistent view of the world and handles record resurrection/deletion. It wrote great test coverage because it parsed my headerdoc and code comments that went into great detail about the expected behavior. The hard part of that implementation was the prose I wrote and the thinking required to come up with it. The actual lines of code were already a small part of the problem space. So yeah Claude saved me a day or two of monotonously writing up test cases. That's great.
Of course Claude also spat out some absolute garbage code using reflection to poke at internal properties because the access level didn't allow the test to poke at the things it wanted to poke at, along with some methods that were calling themselves in infinite recursion. Oh and a bunch of lines that didn't even compile.
The thing is about those errors: most of them were a fundamental inability to reason. They were technically correct in a sense. I can see how a model that learned from other code written by humans would learn those patterns and apply them. In some contexts they would be best-practice or even required. But the model can't reason. It has no executive function.
I think that is part of what makes these models both amazingly capable and incredibly stupid at the same time.
There is no such thing as "after coders": https://zjpea.substack.com/p/embarrassingly-solved-problems
This excerpt:
>A.I. had become so good at writing code that Ebert, initially cautious, began letting it do more and more. Now Claude Code does the bulk of it.
is a little overstated. I think the brownfield section has things exactly backwards. Claude Code benefits enormously from large, established codebases, and it’s basically free riding on the years of human work that went into those codebases. I prodded Claude to add SNFG depictions to the molecular modeling program I work on. It couldn’t have come up with the whole program on its own and if I tried it would produce a different, maybe worse architecture than our atomic library, and then its design choices for molecules might constrain its ability to solve the problem as elegantly as it did. Even then, it needed a coworker to tell me that it had used the incorrect data structure and needed to switch to something that could, when selected, stand in for the atoms it represented.
Also this:
>But A.I.-generated code? If it passes its tests and works, it’s worth as much as what humans get paid $200,000 or more a year to compose.
Isn’t really true. It’s the free-riding problem again. The thing about an ESP is that the LLM has the advantage of either a blank canvas (if you’re using one to vibe code a startup), or at least the fact that several possibilities converge on one output, but, genuinely, not all of those realities include good coding architecture. Models can make mistakes, and without a human in the loop those mistakes can render a codebase unmaintainable. It’s a balance. That’s why I don’t let Claude stamp himself to my commits even if he assisted or even did all the work. Who cares if Claude wrote it? I’m the one taking responsibility for it. The article presents Greenfield as good for a startup, and it might be, but only for the early, fast, funding rounds, when you have to get an MVP out right now. That’s an unstable foundation they will have to go back and fix for regulatory or maintenance reasons, and I think that’s the better understanding of the situation than framing Aayush’s experience as a user error.
Even so, “weirdly jazzed about their new powers” is an understatement. Every team, including ours, has decades of programmer-years of tasks in the backlog; what’s not to love about something you can set loose on pet peeves for free and then see if the reality matches the ideal? git reset --hard if you don't like what it does, and if you do, all the better. The Cuisy thing with the script for the printer is a perfect application of LLMs: a one-off that doesn’t have to be maintained.
Also, the whole framing is weirdly self-limiting. The architectural taste that LLMs are, again, free riding off of is hard-won by doing the work that more senior engineers are now giving to LLMs instead of juniors. We’re setting ourselves up for a serious collective action problem as a profession. The article gestures at this a couple of times.
The thing about threatening LLMs is pretty funny too but something in me wants to fall back to Kant's position that what you do to anything you do to yourself.
I spent ~6hrs with Claude trying to fix a web worker bug in a small JS code base Claude made. In the end it failed and I ran out of credits. Claude kept wanting to rip out huge blocks of code and replace entire functions. We never got any closer to a solution. The Claude hype is unreal. My 'on the ground' experience has been vastly different.
Yes, you can get a project with claude to a state of unrecoverable garbage. But with a little experience you can learn what it's good at and this happens less and less.
That isn't my experience. My code and bug tracker are public, so I have the privilege of being able to paste URLs to tickets into Claude Code with the prompt "what the fuck?" and it usually comes up with something workable on its own.
Regarding LLMs' performance on brownfield projects, I thought of Naur's "Programming as Theory Building". He gives an example of a compiler project that is taken over by a team without guidance from the original developers:
> "at [the] later stage the original powerful structure was still visible, but made entirely ineffective by amorphous additions of many different kinds"
Maybe a way of phrasing it is that accumulating a lot of "code quality capital" gives you a lot more leverage over technical debt, but eventually it does catch up.
>but like most of their peers now, they only rarely write code.
Citation needed. Are most developers "rarely" writing code?
I'd expect that probably less than 10% of my time is spent actually writing code, and not because of AI, but because enough of it is spent analyzing failures, reading documents, participating in meetings, putting together presentations, answering questions, reading code, etc. And even when I have a nice, uninterrupted coding session, I still spend a decent fraction of that time thinking through the design of how I want the change rather than actually writing the code to effect that change.
For one thing, comments here appear to apply to the quality and issues of today, not potentially going forward. Quality will change quicker than anyone expects. I am wondering how many people on HN remember when the first Mac came out with MacPaint, and then PageMaker or Quark. That didn't evolve anywhere near as quickly as AI appears to be.
Also, I am not seeing how anyone is considering whether the gap between what a programmer considers quality and what 'gets the job done' (as mentioned in the article) matters in any business. (Typesetting example: the original laser printers were only 300dpi, but after a short period 1200dpi became 'good enough' for camera-ready copy.)
Because we love tech? I'm absolutely terrified about the future of employment in this field, but I wouldn't give up this insane leap of science fiction technology for anything.
I love tech - tech that actually works well. The current tech we have for AI does not, so I'm not excited about it.
A really good pattern-matching engine is an "insane leap of science fiction"? It saves me a bit of typing here and there with some good pattern matching. Trying to get it to do anything more than a few lines gives me gibberish, or an infinite loop of "Oh, you're right, I need to do X, not Y", over and over - and that's Opus 4.5 or whatever the recent one is.
Would you give it access to your bank account, your 401k, trust it to sell your house, etc? I sure wouldn't.
"One such test for Python code, called a pytest"
The brain rot is real: the author couldn't even think of "unit test".
Why would you expect a reporter to magically know what a "unit test" is? Sounds like a simple miscommunication with one of his sources. Not perfect but not "brain rot".
Another trash article from the New York Times, which financially benefits from this type of content because of its ongoing litigation against OpenAI. I think the assumption that developers don't code is wrong. Most software engineers don't even want to code; they are opportunists looking to make money. I have yet to experience this cliff of coding. These people aren't asking hard enough questions. I have a bunch of things I want AI to build that it completely fails on.
The article could have been written from a very different perspective. Instead, the "journalists" likely interviewed a few insiders from Big Tech and generalized. They don't get it. They never will.
Before the advent of ChatGPT, maybe 2 in 100 people could code. I was actually hoping AI would increase programming literacy but it didn't, it became even more rare. Many journalists could have come at it from this perspective, but instead painted doom and gloom for coders and computer programming.
The New York Times should look in the mirror. With the advent of the iPad, most experts agreed that they would go out of business because a majority of their revenue came from print media. Look what happened.
Understand this, most professional software and IT engineers hate coding. It was a flex to say you no longer code professionally before ChatGPT. It's still a flex now. But it's corrupt journalism when there is a clear conflict of interest because the NYT is suing the hell out of AI companies.
Agreed - just like the Fortune article talking about (Edit: Morgan Stanley, not GS) saying "the AI revolution is coming next year, and will decimate tons of industries, and no one is ready for it". They quote Altman and Musk. Gee - what did you expect from those two snake-oil salesmen?
Also the fact that NYT gives all their devs licenses to Cursor and Claude
What is a coder? Someone who is handed the full specs and sits down and just types code? I have never met such a person. The most annoying part of SWE is everyone who isn't an SWE has inane ideas about what we do.
Never worked on offshoring projects? That is exactly what the sweatshop coders do.
I think that the current AI tooling is a much bigger threat to offshore sweatshops than to domestic programmers.
Why deal with language barriers, time shifts, etc. when a small team of good developers can be so much more productive, allegedly?
It certainly is,
https://www.theregister.com/2026/01/19/hcl_infosys_tcs_wipro...
No we don't.
For one, I never saw a "full spec" (if such a thing even exists) back in my days of making 8k. Annually.
> The most annoying part of SWE is everyone who isn't an SWE has inane ideas about what we do.
I’ve tended to hold the same opinion of what the average SWE thinks everyone else does.
I keep getting stuck on the liability problem of this supposed "new world". If we take this as far as it goes: AI agent societies that designs, architects, and maintains the entire stack E2E with little to no oversight. What happens when rogue AIs do bad things? Who is responsible? You have to have fireable senior engineers that understand deep fundamentals to make sure things aren't going awry, right? /s
Check out the movie Brazil, if you haven't seen it already. Incredibly far ahead of its time.