The silent death of Good Code
amit.prasad.me
68 points by amitprasad 4 hours ago
This is something I've been thinking about as I start to adopt more agent-first coding.
There is a real advantage to having good code, especially when using agents. "Good Code" makes iteration faster: the agent is less likely to make mistakes and will keep producing extensible code that can easily be debugged (by both you and the agent).
A couple months ago I refactored a module that had gotten unwieldy, and I tried to test whether Claude could add new features on the old code. Opus 4.5 just could not add the feature in the legacy module (which was a monster function that had been feature-crept), but was able to completely one-shot it after the refactor.
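To give a rough flavor (with invented names, not the actual module), the refactor was essentially going from one feature-crept function to a handful of small, named steps that an agent can extend without reading everything:

    from dataclasses import dataclass

    # Before: one feature-crept function that mixes parsing, pricing rules,
    # special cases, and output. Adding a feature means understanding every branch.
    def process_order_legacy(raw, user, legacy_mode=False, promo=None, dry_run=False):
        ...  # imagine ~300 lines of interleaved concerns here

    # After: small, named steps with one obvious extension point each.
    @dataclass
    class Order:
        items: list
        total: float

    def parse_order(raw: dict) -> Order:
        """Turn raw input into a typed Order; no business rules in here."""
        return Order(items=raw["items"], total=sum(i["price"] for i in raw["items"]))

    def apply_promotions(order: Order, promo: str | None) -> Order:
        """All discount logic lives here; a new promo type is just a new branch."""
        if promo == "TEN_OFF":
            order.total = max(order.total - 10, 0)
        return order

    def process_order(raw: dict, promo: str | None = None) -> Order:
        """Thin pipeline that an agent (or a human) can extend one step at a time."""
        return apply_promotions(parse_order(raw), promo)

This is only a sketch of the shape of the change, not the real code.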
So there is clear value in having "clean code", but I'm not sure how valuable it is. If even AGI cannot handle tech debt, then there's value in at least building scaffolding (or at least prompting the scaffolding first). On the other hand, there may be a future where the human doesn't concern himself with "clean code" at all: if "clean code" only saves a sufficiently advanced agent 5 minutes, the scaffolding work is useless.
My reference is assembly - I'm in my early 30s and I have never once cared about "clean" assembly. I have cared about the ASM of specific hot functions I have had to optimize, but I've never learned what is proper architecture for assembly programs.
Right: Having "Good Code" is an investment into future velocity.
IMO we shouldn't strive to make an entire codebase pristine, but building anything on shaky foundations is a recipe for disaster.
Perhaps the frontier models of 2026H2 will be good enough to start compacting and cleaning up entire codebases, but given the trajectory of how frontier labs suggest workflows for coding agents, combined with increasing context window capabilities, I don't see this being a priority or a design goal.
[flagged]
> Is there anything you really have to get done regardless of quality right this second?
A vast number of things. There are a vast number of things I will accept having done at even mediocre quality, because in the old pre-AI world I would never have gotten to them at all.
Every friend with a startup idea. Every repetitive form I have to fill out every month for compliance. Just tooling for my day to day life.
Please name a single "startup" you've shipped from vibe coding that's gotten a paying customer you didn't know IRL.
The fact that everyone thinks they're a startup founder now is a major part of the problem. Y'all are falling for the billionaire marketing. Anything that can be built with vibe coding for a startup can be instantly copied by someone else. This makes no sense.
>Why would you adopt agent first coding?
Since most work on software projects is going to be done by AI agents handling the coding, debugging, QA, etc., you should prioritize finding ways to increase the velocity of those agents to maximize the velocity of the project.
>Are you that bad at it?
That is irrelevant.
>Is there anything you really have to get done regardless of quality right this second?
You are implying that AI agents have low quality work, but that is not the case. Being able to save time for an equivalent result is a good thing.
>Just write the code yourself, and stop training your replacement.
AI labs are the ones training the better AI.
> That is irrelevant.
Why?
That commenter is trying to imply that AI agents are a form of crutch: if you are bad at programming, you use an AI agent to program for you. In reality, programmers of all skill levels are migrating to using AI agents for programming.
Just because a high-skilled programmer can use an LLM with some effectiveness doesn't mean someone with less skill will be able to match their ability. You LLM-kiddies are worse than NFT people in 2021 claiming they're making art.
I really can't wait until the inference providers 5x their prices and you guys realize you're completely incompetent and that you've tied your competency 1:1 to the quality and quantity of tokens you can afford. You're going to be a character from the movie Idiocracy.
Of course you'll still be coping by claiming you're so productive using some budget kimi 2.5 endpoint that consumes all your personal data and trains on your inputs.
Good code was approximately never valued in enterprise. How many companies worth billions or even trillions have webpages that take 5+ seconds to load text, and use Electron for their desktop applications? In that regard, nothing has changed.
There is still a market for good code in the world, however. The uses of software are nearly infinite, and while certain big-name software gets a free pass on being shitty due to monopoly and network effects, other types of software will still find people who will pay for them if they are responsive, secure, not wildly buggy, and can add new features without a 6 month turnaround time because the codebase isn't a crime against humanity.
On another note, there have been at least four articles on the front page today about the death of coding. As there are every other day. I know I'm tired of reading them, but don't people get bored of writing them?
> I know I'm tired of reading them, but don't people get bored of writing them?
I understand the sentiment here but it shouldn't be surprising that people are upset that their profession and livelihoods are being drastically changed due to advances in AI.
So funny when people point at electron as if it singlehandedly makes every program unusable.
Also, I would assume there are not many significant pages at $B/trillion companies that take 5 seconds to load text and are used frequently.
> I know I'm tired of reading them, but don't people get bored of writing them?
People never get tired of reading or commenting on commentary on their hobbies.
New Reddit and Outlook.com are two off the top of my head. It is not uncommon to be looking at a spinner for several seconds. There are other websites that are not primarily for text but are still insane. Twitch.TV, an old favorite of mine, now routinely takes 10+ seconds despite having Amazon money behind it. Youtube routinely takes several seconds to load the page, which is still unacceptable even for a video website. These sites are maintained by FAANG-tier engineers being paid mid-high 6 figures or 7 figures, who I'm sure are mostly perfectly competent, and yet they are completely dysfunctional because enterprise environments inevitably create structural disincentives to producing good code.
I use Electron applications. They are usable, for some value of the word. I am certainly not happy about it, though. I loathe the fact that I have 32GB RAM and routinely run into memory issues on a near-daily basis that should literally never happen with the workloads I'm doing. With communication-based apps like Slack and Discord where your choice of software to use comes down entirely to where the people you're communicating are, you will use dogshit because there is no point to communicating to the void on a technically superior platform.
> I know I'm tired of reading them, but don't people get bored of writing them?
Look, it's either this or a dozen articles a day about Claude Code.
Good code is extremely subjective, most bad code is built on a good code foundation. And most foundational software (think linux, ffmpeg, curl, v8, etc.) maintainers are pushing back.
Once AI/Agents actually master all tools we currently use (profilers, disassembly, debuggers) this may change but this won't be for a few years.
IMO, you need to have the capacity to write Good Code to know what Good Enough Code is. It's highly contextual to a particular problem and season in a codebase's life. One example: ugly code that upholds an architecture that confers conceptual leverage on a problem. Most of the code can operate as if some gnarly problem is solved without having to grapple with it themselves. Think about the virtual memory subsystem of an OS.
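To make "conceptual leverage" concrete, here is a toy sketch (the class and names are invented, not from any real codebase): the gnarly part lives in one place, and every caller gets to pretend the problem doesn't exist, much like code above the virtual memory subsystem gets to pretend memory is flat and plentiful.

    import os, pickle, tempfile

    class SpillableDict:
        """Toy 'virtual memory': callers treat it as a plain dict,
        while the gnarly part (spilling cold entries to disk) stays in here."""

        def __init__(self, max_in_memory=1000):
            self._hot = {}                  # entries kept in RAM
            self._dir = tempfile.mkdtemp()  # backing store for cold entries
            self._max = max_in_memory

        def _path(self, key):
            return os.path.join(self._dir, f"{abs(hash(key))}.bin")

        def __setitem__(self, key, value):
            if len(self._hot) >= self._max:          # evict one hot entry to disk
                old_key, old_val = self._hot.popitem()
                with open(self._path(old_key), "wb") as f:
                    pickle.dump(old_val, f)
            self._hot[key] = value

        def __getitem__(self, key):
            if key in self._hot:
                return self._hot[key]
            with open(self._path(key), "rb") as f:   # the "page fault" path
                return pickle.load(f)

    # The rest of the codebase just writes cache[k] = v and reads cache[k],
    # as if memory were unlimited.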
The problem with this argument is many do not believe this sort of leverage is possible outside of a select few domains, so we're sort of condemned to stay at a low level of abstraction. We comfort ourselves by saying it is pragmatic.
LLMs target this because the vast, vast majority of code is not written like this, for better or for worse. (It's not a value judgment, it just is.) This is a continuation (couldn't resist) of the trend away from things like SICP. Even the SICP authors admitted programming had become more about experimentation and gluing together ready-made parts than building beautifully layered abstractions from which programs just fall out easily.
I don't agree with the author, BTW. Good code is needed in certain things. It's just that a lot of the industry really tries to beat it out of you. That's been the case for a while. What's different now is that devs themselves are seemingly joining in (or at least are being perceived to be).
> IMO, you need to have the capacity to write Good Code to know what Good Enough Code is.
I completely agree, and it's one of the biggest problems with trying to talk about "how you use agents". A lot of people using the same agents with the same workflow may see wildly different results depending on their ability to evaluate the end result.
> The problem with this argument is many do not believe this sort of leverage is possible outside of a select few domains, so we're sort of condemned to stay at a low level of abstraction.
I think there's a similar, tangential problem to consider here: people don't think that they are the person to create the serious abstraction that saves every future developer X amount of time, because it's so easy to write the glue code every time. A world where every library or API was as well thought out as the virtual memory subsystem would be overspecified, but at the same time it would enable creations far beyond the ones seen today (imo).
> Even the SICP authors admitted programming had become more about experimentation and gluing together ready-made parts than building beautifully layered abstractions which enable programs to just fall out of easily.
I think good code is even more important now.
People talk about writing the code itself and being intimate with it and knowing how every nook and cranny works. This is gone. It’s more akin to on call where you’re trudging over code and understanding it as you go.
Good code is easy to understand in this scenario; you get a clear view of intent, and the right details are hidden from you to keep from overwhelming you with detail.
We’re going to spend a lot more time reading code than before, better make it a very good experience.
none of this even kind of addresses why the article implies that people stopped writing good code. why are we going to spend "a lot more time reading code than before"? is this an ai generated comment?
The author effectively argues deep thinking is dead, that people are no longer going to take the time to understand the problem and solution space before they solve it.
I think that’s untrue, I think it’s /more/ important than before. I think you’re going to have significantly more leverage with these tools if you’re capable of thinking.
If you’re not, you’re just going to produce garbage extremely fast.
The use of these tools does not preclude you from being the potter at the clay wheel.
Just because you or I may invest effort into deep-thinking, it does not mean that others will.
I'm not worried about this at Modal, but I am worried about this in the greater OSS community. How can I reasonably trust that the tools I'm using are built in a sound manner, when the barrier to producing good-looking bad code is so low?
> How can I reasonably trust that the tools I'm using are built in a sound manner, when the barrier to producing good-looking bad code is so low?
Honest answer: You never could.
Hm? The article is pretty clear about two claims, IMO: (1) good code has been rare for a long time because the job is a pragmatic one and not a philosophical one but that sometimes "good code" pays off down the line, and (2) possibly the "pays off down the line" will be less important in the future with AI coding tools.
And the comment by 'ElatedOwl is pretty directly responding to that second idea.
LLMs also make refactoring for readability, simplicity and performance far easier.
Nothing has fundamentally changed! A good solution is a good solution.
I do worry that the mental health of developers will take a downturn if they’re forced into a brain rotting slop shovelling routine, however.
So yes readability and good concise code is still important.
No one likes good code because it takes a lot of upfront time.
- PMs hate it because you're busy putting up scaffolding instead of painting
- Managers hate it because they have to cover for it
- Other engineers hate it because they could be doing it better
- VPs and directors hate it because they can't think beyond the release cycle, so to them the engineer is an architecture astronaut who should just focus
There is basically no reward for actually putting thought into a programming solution anymore. The incentives are aligned against it unless you can get your manager to run interference for you.
I love the sentiment, but 40 years in the business realm of software development has taught me “good code” is never a priority for management. It’s difficult to explain good unit testing, tech debt, or just going through proper solution selection with management.
So having used Claude Code since it came out I’ve decided the resulting code is overall just as good as what I’d see in regular programming scenarios.
Let management argue their case, don't do it for them.
I am management, but now also in front of delivery because I know how to construct software.
Good code has always been written with a reader in mind. The compiler understanding it was assumed. The real audience was other engineers. We optimized for readability because it made change easier and delivered business value faster.
That audience is changing. Increasingly, the primary reader is an agent, not a human. Good code now means code that lets agents make changes quickly and safely to create value.
Humans and agents have very different constraints. Humans have limited working memory and rely on abstraction to compress complexity. Agents are comfortable with hundreds of thousands of tokens and can brute-force pattern recognition and generation where humans cannot.
We are still at the start of this shift. Our languages and tools were designed for humans. The next phase is optimizing them for agents, and it likely will not be humans doing that optimization. LLMs themselves will design tools, representations, and workflows that suit agent cognition rather than human intuition.
Just as high-level languages bent machine code toward human needs, LLMs let us specify intent at a much higher level. From there, agents can shape the underlying systems to better serve their own strengths.
For now, engineers are still needed to provide rigor and clearly specify intent. As feedback loops shorten, we will see more imperfect systems refined through use rather than upfront design. The iteration looks less like careful planning and more like saying “I expected you to do ABC, not XYZ,” then correcting from there.
I really resonate with this post; I too appreciate "Good Code"(tm). In a discussion on another forum I had a person tell me that "Reading the code that coding agents produce is like reading the intermediate code that compilers produce, you don't do that because what you need to know is in the 'source.'"
I could certainly see the point they were trying to make, but pointed out that compilers produced code from abstract syntax trees, and they created abstract syntax trees by processing tokens that were defined by a grammar. Further, the same tokens in the same sequence would always produce the same abstract syntax tree. That is not the case with coding 'agents'. What they produce is, by definition, an approximation of a solution to the prompt as presented. I pointed out you could design a lot of things successfully just assuming that the value of 'pi' was 3. But when things had to fit together, they wouldn't.
We are entering a period where a phenomenal amount of machine code will be created that approximates the function desired. I happen to think it will be a time of many malfunctioning systems in interesting and sometimes dangerous ways.
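To put a number on the 'pi is 3' point (just standard-library arithmetic, nothing more): each part looks fine on its own, but a part sized with the approximation won't mate with one sized exactly.

    import math

    diameter = 100.0                 # mm
    flange = 3 * diameter            # circumference assumed with pi = 3
    gasket = math.pi * diameter      # circumference machined exactly

    print(f"mismatch: {gasket - flange:.2f} mm")  # ~14.16 mm gap where the parts meet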
> you could design a lot of things successfully just assuming that the value of 'pi' was 3. But when things had to fit together, they wouldn't.
Apt analogy. I’m gonna steal it!
After reading a bunch of blog posts like this, I'm now kind of glad to see "good code" go away and am also glad to pour more gasoline on the flames of fire burning away at such code, so to speak
I think "good code" t was a "nice" pursuit but became too much of an end in itself while code was always - for me - just a means to create something that "just werks"
But I'm not sure the "good code" fans need to worry because they might be able to obsess over "proper prompting" and the "correct way to use agents" or "appopriate ai tooling" or something like that on this next wave of "code creation"
I wish it was silent, we've been hearing about it non-stop for the past 4 years.
I agree it is sad though. I changed careers from one I was unhappy with into software development. Part of what drew me to software was that (at least sometimes) it feels like there is a beauty in writing what the author describes as great code. It makes you really feel like a 'master craftsman', even if that sounds a bit dramatic. That part of the profession seems to be fading away the more agentic coding catches on. I still try to minimize use of any LLMs when doing personal projects so I can maintain that feeling.
> Good Circuits
Afaic, people designing circuits still do care about that.
> Good Assembly
The thing with the current state of coding is that we are not replacing "Coding Java" with something else. We are replacing it with "Coding Java via discussion". And that can be fine at times but it still is a game of diminishing returns. LLMs still make surprising mistakes, they too often forget specifics, make naive assumptions and happily follow along local minima. All of the above lead to inflated codebases in the long run which leads to bogged down projects and detached devs.
> This same colleague then invested time into understanding the kernel subsystem, the exact reasons why the original C program was written how it was, and rewrote the Rust translation himself. The difference was night and day; the code flowed naturally, explained itself and the underlying subsystems, and may genuinely be some of the nicest parts of the entire codebase.
This is the point that everybody needs to calm down and understand. LLMs are fantastic for POCs _which then get rewritten_. Meaning: the point is to rewrite it, by hand. Even if this is not as fast as shipping the POC and pretending everything is ok (don't do this!) it still drastically speeds up the software engineering pipeline and has the potential to increase Good Code overall.
A perfectly reasonable rule in software organizations is: for greenfield code, LLMs are strictly required for 1st-pass prototyping (also required!). And then: hand-rewrite (within reason) for production code. Your company will not lose its competitive edge following this guideline, and neither will you lose your hard-earned skills.
I'm not sure how this guideline makes sense. LLMs are great at dumb things I shouldn't have to type but can be well defined before they write something.
This statement makes almost zero sense: "A perfectly reasonable rule in software organizations is: for greenfield code, LLMs are strictly required for 1st-pass prototyping (also required!). And then: hand-rewrite (within reason) for production code. Your company will not lose its competitive edge following this guideline, and neither will you lose your hard-earned skills."
"Give me a proxy, written in go, that can handle jwt authentication" isn't your traditional crud stuff, but Claude answers that quite well.
That sounds nice in theory but how many managers are going to tolerate a rewrite when there is something "good enough" sitting in front of them? (They can't see the tech debt and the vulnerabilities, just that it Apparently Does The Thing.)
I've found that Good Code is actually actively detrimental to agent performance. I suspect agent-written code is very comprehensible to agents (for example, agents love to define single-use variables because it lets them document the code without adding comments, reading whole files, or understanding novel code patterns such as complex pipeline statements) but is a nightmare to read. You have to keep the meanings of all the small variables in your head, so your short-term memory gets overloaded with small pieces of info. I tried making the agent refactor to reduce these, but noticed a substantial increase in how often it misunderstands what the code does.
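A made-up illustration of what I mean (not a transcript from any real agent session): the first form is what agents tend to emit, the second is the kind of compaction that, in my experience, made the model misread the code more often.

    # Agent-friendly: every step gets a single-use name, so the model can
    # "document" itself without comments -- but a human has to hold all
    # these names in working memory.
    def active_admin_emails(users):
        active_users = [u for u in users if u["active"]]
        admin_users = [u for u in active_users if u["role"] == "admin"]
        admin_emails = [u["email"] for u in admin_users]
        sorted_emails = sorted(admin_emails)
        return sorted_emails

    # Human-friendly (to some of us): one dense pipeline, fewer names to track,
    # but the agent seems to misread this form more often.
    def active_admin_emails_compact(users):
        return sorted(u["email"] for u in users if u["active"] and u["role"] == "admin")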
I think what you are finding is that people's definition of "Good Code" differs the same way two people's definition of "good food" differs.
Man am I getting tired of these articles and we can do without this neurotic melancholic whining. Maybe it is the title of the article that triggered me, but it reminded me of hearing Douglas Murray read excerpts from "The Strange Death of Europe" in his self-aggrandising pompous tone.
The author's colleague needed a couple of tries to write a kernel extension and somehow this means something about programming. If it were not for LLMs I would not have gone back to low-level programming; this stuff is actually getting fun again. Let's check the assembly the compiler produced for the code the LLM produced.
To be clear, I am also having the most fun I've had when it comes to side-projects and even more exploratory things at work. I don't derive all my joy from "Good Code" -- that's silly! I would much rather ship tangible products and features and/or tackle things at home that I wouldn't otherwise.
On the other hand, the other responsibilities of being an engineer have become quite a bit less appealing.
I've come accept that producing code I'm truly proud of is now my hobby, not my career. The time it takes to write Good Code is unjustifiable in a business context and I can't make the case for it outside of personal projects.
Hilarious. The code being produced previously was crap - it was just your crap. Baseline agents produce something similar, but can at least be guided, durably, towards producing less crap code.
This feels very odd to me, because I'm actually able to refactor and DRY and generally improve my code and tests and documentation much more with agents to help speed up the process than I ever would have before.
I also make sure to describe and break down problems when I ask an agent to implement them in such a way that they produce code that I think is elegant.
It seems to me like people think there's only two settings: either slaving away carefully on the craft of your code at a syntactic level, manually writing it, or shitting out first-pass vibe-coded slop without taking care to specify the problem or iterate on the code afterwards. But you can apply just as much care to what the agent produces, and in my experience, still see significant speedups, since refactoring and documentation and pulling out common abstractions and so on are something that agents can do extremely reliably and quickly, but otherwise require a lot of manual text editing and compile/test passes to do yourself.
As long as you don't get hung up on making the agent produce exactly character for character, the code you would have produced, but instead just have good standards for functionality and cleanliness and elegance of design.
> Either slaving away carefully on the craft of your code, manually writing it, or shitting out first-pass vibe-coded stuff without really taking care to specify the problem or iterate on the code afterwards.
I think the thing you are missing is that people are
> shitting out first-pass vibe-coded stuff without really taking care to specify the problem or iterate on the code afterwards
It is a mistake to assume that people will take a path other than the path of least resistance now when they never did before, e.g. copy-pasting directly from Stack Overflow without understanding the implications of the code.
But that's kind of my point — there's still a choice whether to care about the quality of your code and spend time refining it or not with agentic coding as with any other technology; people who took the time to write good code before can absolutely continue to do that, and people who didn't care about good code before will not care about good code now either. It was a choice that was not the path of least resistance before, and it is still a choice that is not the path of least resistance now.
Now, there is the very valid point that those who don't care about code quality can now churn it out at vastly accelerated rates, but that didn't really feel like what the original article was talking about. It felt like it was specifically trying to make the claim that agentic tools don't really afford the ability to refine or improve your code, or that they strongly discourage it, such that you kind of can't care about good code anymore. And it's that that I wanted to push back on.
There has always been a tension between "take the time to build something you know will work" and "prioritize speed over all else and hope you get lucky and it doesn't fall over too fast" in software. AI is making the difference in speed between the two schools of thought larger and larger, and it's almost certain to make the latter philosophy more financially attractive.
Yeah, I can see that. But that didn't really feel like what the original article was arguing. It felt more that it was arguing that even people who care about good code, if they use agentic tools at all, can't produce good code, and it was the advantage in the velocity of agentic tools as a whole over the production of good code as a strictly separate category that was the problem?
Opus is quite good at refactoring. Also, we can finally have all the helper functions/beautiful libraries/tests that we always wanted to have. There is no excuse anymore to approximate a parser with regular expressions. Or to not implement the adapter class which makes an ugly unchangeable interface beautiful.
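As a toy example of that adapter idea (the names are invented, not from any particular library): this is exactly the kind of small wrapper that used to feel like unjustifiable busywork and is now trivial to have an agent generate and test.

    class LegacyPaymentGateway:
        """Stand-in for a third-party interface we can't change."""
        def do_txn(self, amt_cents, cust_ref, flags=0):
            return {"ok": True, "ref": cust_ref, "amt": amt_cents}

    class Payments:
        """Adapter: the interface the rest of the codebase actually wants."""
        def __init__(self, gateway: LegacyPaymentGateway):
            self._gw = gateway

        def charge(self, customer_id: str, amount_eur: float) -> str:
            result = self._gw.do_txn(int(round(amount_eur * 100)), customer_id)
            if not result["ok"]:
                raise RuntimeError(f"payment failed for {customer_id}")
            return result["ref"]

    # Callers see Payments.charge("cust-42", 19.99) and never the ugly do_txn().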
I believe the right use of AI makes it possible to write more beautiful code than ever before.
I would be extremely happy to be proven wrong! I love using agents for exploratory prototypes as well as "rote" work, but have yet to see them really pay off when dealing with existing tech debt.
I find that the flaws of agentic workflows tend to be in the vein of "repeating past mistakes", looking at previous debt-riddled files and making an equivalently debt-riddled refactor, despite it looking better on the surface. A tunnel-vision problem of sorts
Agents can write good code. If you don't like the way that they write code, tell them to write it differently. Do that until you think the code is good.
There's an opportunity-cost here. I use agents to be more productive. As many have noted, "Good Code" doesn't rank highly compared to actually shipping a product.
The tragedy, for me, is that the bar has been lowered. What I consider to be "good enough" has gone down simply because I'm not the one writing the code itself, and feel less attachment to it, as it were.
Doesn't the question then become "is there still an objective advantage to good code"?
If the answer is yes then it’s a tragedy - but one that presumably will pass once we collectively discover it. If not, then it’s just nostalgic.
It's hard to say. Perhaps LLMs of tomorrow will become capable enough to fix the mistakes of LLMs today. If so, great -- I'm worried about nothing.
If not, we could see that LLMs of tomorrow struggle to keep up with the bloat of today. The "interest on tech debt" is a huge unknown metric w.r.t. agents.
This. There’s no limitation to your prompting. If you feed rules and patterns for clean code to a bunch of agents they’ll happily work on that level.
Just right now no one cares enough yet. Give it a year or two.
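For what it's worth, "feeding rules" can already be as mundane as a project instructions file the agent reads on every run (e.g. a CLAUDE.md for Claude Code); the specific rules below are only examples:

    # CLAUDE.md (excerpt)
    - Prefer small, pure functions; nothing longer than ~40 lines.
    - No new module-level mutable state; thread dependencies through parameters.
    - Every new public function gets a docstring and a unit test in tests/.
    - Run the linter and the test suite before declaring a task done.
    - When touching legacy modules, refactor only what the change requires.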
I could conceive of something evolving on a different abstraction layer - say, clean requirements and tests, written to standard, enhanced with “common sense”
I run a company called Good Code and was quite worried for half a second!
There's just no longer any value in good code, just like there's no value in Mel Kaye's beautiful hand-assembled programs:
https://users.cs.utah.edu/~elb/folklore/mel.html
But now, reading, understanding, and maintaining the software is the job of coding agents. You are free to do the interesting work: setting goals and directions for the agents, evaluating the finished product, communicating with stakeholders, etc. This has always been the hard and interesting part of systems design: solving real-world problems for people and businesses.
the rise of "good enough" was the death of "good code"
If it’s easy to read and understand but doesn’t work, or is slow to execute, or costs a lot to run, is it good code?
If the function is a black box, but you’re sure the inputs produces a certain output without side effects and is fast, do you NEED “good code” inside?
After about 10yrs of coding, the next 10 of coding is pretty brainless. Better to try and solve people/tech interaction problems than plumbing up yet-another-social/mobile/gaming/crypto thing.
The worst part of vibe coding is developers as managers of "agents".
AI is at best a good intern or a new junior developer. We're locking in mediocrity and continuing enshittification. "Good enough" is codified as good enough, and nothing will be good or excellent. Non-determinism and some amount of inaccuracy at the margins, continually, no matter the industry or task at hand (including finance), just so we can avoid paying a person to do the job.
The good thing is that now, instead of spending 6 months to a year working on your app and making sure the code is just perfect, you spend just a month or two, generate some garbage code, but at least publish your app before you burn out. Once it gains some traction, like it or not you're going to have to "fix the code" as well. I feel that AI is really a blessing for good programmers/coders because they start to focus more on the business of what they build than on the code.
We’re not locking in anything; “at best a good intern or new junior developer” was maybe true at like Sonnet 4 and earlier. Code is not codified, it's living. Models of tomorrow will correct the model outputs of today. At some point alarmingly soon, no one will read code, just like nobody reads the assembly output of a C compiler.
Non determinism and inaccuracy are also very real features of human programmers.
However if the 'non determinism and inaccuracy' of LLMs is more pathogenic than that of humans, then we have a problem. Pathogenesis is inherently a system level effect, so it may take a little time (and money!) to become evident.
yeah, a lot of people are just coping. If someone wants to become better or more productive, they should invest in engineering guardrails and verification/validation layers.
There are thousands of examples where tech became obsolete, and frankly it's a given. No coder's opinion will change it, but everybody is free to do whatever hobby they want. The author does seem to accept it, but the commenter above does not.
That's not even remotely close to true anymore. Agents are far better than any intern or junior developer.
The silver lining is realizing that much of my management never cared about good code or quality to begin with. So I was fooling myself. The AI/LLM excitement just makes it more obvious now.
So much this. No one ever really cared about “good code” except some engineers who took an irrational amount of pride in their code.
Yes, just the people who actually made a difference in our profession, as opposed to producers of slop and corporate shit.
As civilizations declined, pride in one's work would have been more or less as described in this comment.
However a lot of the modern world is carried by 'pride of workmanship' - and not just by those who 'make'. It's an extension of the 'planting trees' parable, to care about things even though you are not immediately (or ever) rewarded.
If you did it for management, yes.
But that's Soviet bureaucracy and Potemkin villages with extra steps.