How to effectively write quality code with AI
heidenstedt.org | 151 points by i5heu 7 hours ago
I created my own Claude skill to enforce this and make sure it weaves in all the best practices we've learned.
https://github.com/ryanthedev/code-foundations
I’m currently working on a checklist and profile based code review system.
The post only touches briefly on linting, in point 7. For me, setting up a large number of static code analysis checks has had the highest impact on code quality.
My hierarchy of static analysis looks like this (it is TypeScript-focused, but in principle translatable to other languages):
1. Typesafe compiler (tsc)
2. Basic lint rules (eslint)
3. Cyclomatic complexity rules (eslint, sonarjs)
4. Max line length enforcement (via eslint)
5. Max file length enforcement (via eslint; a config sketch covering items 3-5 follows the list)
6. Unused code/export analyser (knip)
7. Code duplication analyser (jscpd)
8. Modularisation enforcement (dependency-cruiser)
9. Custom script to ensure shared/util directories are not overstuffed (built using dependency-cruiser as a library rather than as an exec)
10. Security check (semgrep)
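For items 3-5, a rough sketch of what the relevant eslint rules could look like. This is illustrative only: it assumes a flat config with eslint-plugin-sonarjs installed, and the thresholds are example values, not a recommendation.

    // eslint.config.mjs -- illustrative flat config for items 3-5; thresholds are examples
    import sonarjs from "eslint-plugin-sonarjs";

    export default [
      {
        files: ["src/**/*.ts"],
        plugins: { sonarjs },
        rules: {
          // 3. cyclomatic / cognitive complexity limits
          complexity: ["error", { max: 10 }],
          "sonarjs/cognitive-complexity": ["error", 15],
          // 4. max line length
          "max-len": ["error", { code: 120 }],
          // 5. max file length (ignoring blank lines and comments)
          "max-lines": ["error", { max: 300, skipBlankLines: true, skipComments: true }],
        },
      },
    ];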
I stitch all of the above into a single `pnpm check` command and have defined an agent rule to run it before marking a task as complete.
Finally, I make sure `pnpm check` also runs as part of a pre-commit hook, to verify that the agent has indeed addressed all the issues; a sketch of both follows below.
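A minimal sketch of what the combined check could expand to (assuming husky for the hook; every flag, path, and config name here is illustrative and would need to match the actual setup):

    #!/bin/sh
    # scripts/check.sh -- roughly what a `pnpm check` script might run (illustrative flags/paths)
    set -e
    pnpm exec tsc --noEmit                                     # 1. type-check only, emit nothing
    pnpm exec eslint .                                         # 2-5. lint, complexity, line/file length
    pnpm exec knip                                             # 6. unused files, exports, dependencies
    pnpm exec jscpd src                                        # 7. copy/paste duplication report
    pnpm exec depcruise src --config .dependency-cruiser.cjs   # 8-9. module boundary rules
    pnpm exec semgrep scan --config auto --error               # 10. security rules, fail on findings

The hook itself is then just:

    # .husky/pre-commit (any hook runner works)
    pnpm check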
This makes a dramatic improvement in code quality, to the point where I'm able to jump in and modify the code by hand easily when the LLM slot machine gets stuck every now and then.
(Edit: added the mention of the pre-commit hook, which I missed in the initial comment)
My setup has some of the things mentioned, and I've found that occasionally the LLM will lie that something passes when it doesn't.
Yup I have run into the same.
I use a pre-commit hook to run `pnpm check`. I missed mentioning it in the original comment. Your reply reminded me of it and I have now added it. Thanks.
I wonder, at the end of all this, if it's still worth the risk?
A lot of how I form my thoughts is driven by writing code: seeing it on screen, running into its limitations.
Maybe it's the kind of work I'm doing, or maybe I just suck, but for me the code is a forcing mechanism for ironing out the details, and I don't get that when I'm writing a specification.
I second this. This* is the matter against which we form understanding. This here is the work at hand, our own notes, discussions we have with people, the silent walk where our brain kinda processes errors and ideas .. it's always been like this since I was a kid, playing with construction toys. I never ever wanted somebody else to play while I waited to evaluate whether it fit my desires. Desires that often come from playing.
Outsourcing this to an LLM is similar to an airplane stall .. I just dip mentally. The stress goes away too, since I assume the LLM will get rid of the "problem", but I have no more incentive to think, create, or solve anything.
Still blows my mind how different people approach some fields. I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.
I'll push back against this a little bit. I find any type of deliberative thinking to be a forcing function. I've recently been experimenting with writing very detailed specifications and prompts for an LLM to process. I find that as I go through the details, thoughts occur to me; things I hadn't considered in the design come to me. This is very much the same phenomenon as when I was writing the code by hand. I don't think this is a binary either/or. There are many ways to have a forcing function.
I think it's analogous to writing and refining an outline for a paper. If you keep going, you eventually end up at an outline where you can concatenate what are basically sentences together to form paragraphs. This is sort of where you are now: if you spec well, you'll get decent results.
I agree, I felt this a bit. The LLM can be a modeling peer in a way. But the phase where it goes to validate / implement is also key to my brain. I need to feel the details.
> I see people at work who are drooling about being able to have code made for them .. but I'm not in that group.
people seem to have an inability to predict second and third order effects
the first order effect is "I can sip a latte while the bot does my job for me"... well, great I suppose, while it lasts
but the second order effect is: unless you're in the top 10%, you will now lose your job, permanently
and the third order effect is the economy collapses as it is built on consumer spending
I wonder over the long term how programmers are going to maintain the proficiency to read and edit the code that the LLM produces.
Everything you have said here is completely true, except for "not in that group": the cost-benefit analysis clearly favors letting these tools rip, even despite the drawbacks.
Maybe.
But it's also likely that these tools will produce mountains of unmaintainable code and people will get buried by the technical debt. It kind of strikes me as similar to the hubris of calling the Titanic "unsinkable." It's an untested claim with potentially disastrous consequences.
> But it's also likely that these tools will produce mountains of unmaintainable code and people will get buried by the technical debt.
It's not just likely, it's guaranteed to happen if you're not keeping an eye on it. So much so that it's really reinforced my existing prejudice towards typed and compiled languages, which reduce some of the checking you need to do.
Using an agent with a dynamic language feels very YOLO to me. I guess you can somewhat compensate with reams of tests, though (which raises the question: is the dynamic language still saving you time?).
Companies aren't evaluating you on "keeping an eye on technical debt", but they ARE directly evaluating you on whether you use AI tools.
Meanwhile they are hollowing out work forces based on those metrics.
If we make doing the right thing career limiting this all gets rather messy rather quickly.
Tests make me faster. Dynamic or not feels irrelevant when I consider how much slower I’d be without the fast feedback loop of tests.
Static type checking is even faster than running the code. It doesn't catch everything, but if finding a type error in a fast test is good, then finding it before running any tests seems like it would be even better.
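A toy illustration of that point (hypothetical names; `tsc --noEmit` flags the mistake before any test ever runs):

    // applyDiscount is a made-up example function
    function applyDiscount(price: number, rate: number): number {
      return price * (1 - rate);
    }

    // Caught at compile time, no test needed:
    // error TS2345: Argument of type 'string' is not assignable to parameter of type 'number'.
    applyDiscount("19.99", 0.1);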
Oh, I'm well aware of this. I admitted defeat, in a way.. I can't compete. I'm just at a loss, and unless LLMs stall and break for some reason (AI bubble, enshittification..) I don't see a future for me in "software" in a few years.
Somehow I appreciate this type of attitude more than the one which reflects total denial of the current trajectory. Fervent denial and AI trash-talking has been maybe the single most dominant sentiment on HN over the last year, admittedly interspersed with a fair amount of amazement at our new toys.
But it is sad if good programmers lose sight of the opportunities the future will bring (future as in the next few decades). If anything, software expertise is likely to be one of the most sought-after skills, only a slightly different kind of skill than churning out LOC on a keyboard faster than the next person: people who can harness the LLMs, design prompts at the right abstraction level, verify the code produced, understand when someone has injected malware, etc. These skills will be extremely valuable in the short to medium term AFAICS.
Ultimately, of course, we will become obsolete if nothing (really) catastrophic happens first, but by then all human labor will likely be obsolete too, and society will need to be organized around something other than exchanging labor for money for the means of sustenance.
The 'engineers are not paid to write LOC' line drives me crazy; nobody is sad about not having to type anymore. My two issues are that it levels the delivery game (for the average web app, anybody can now output something acceptable) and that it doesn't help me conceptualize solutions any better, so I end up letting it produce stuff that is not malleable enough.
If the world comes to that it will be absolutely catastrophic, and it's a failure to grapple with the implications that many executives of AI companies think you can paper over the social upheaval with some UBI. There will be no controlling what happens, and you don't even need to believe in some malicious autonomous AI to see that.
Yep, it's a rather depressing realization, isn't it. Oh well, life moves on I suppose.
I think we realistically have a few years of runway left though. Adoption is always slow outside of the far right of the bell curve.
I'm sorry if I pulled everybody down .. but it's been many months since Gemini and Claude became solid tools, and I regularly have this strong gut feeling. I tried reevaluating my perception of my work, goals, value .. but I keep going back to nope.
I feel the same.
Frankly, I am not sure there is a place in the world at all for me in ten years.
I think the future might just be a big enough garden to keep me fed while I wait for lack of healthcare access to put me out of my misery.
I am glad I am not younger.
That's also how I feel.
I think you have every right to doubt those telling us that they run 5 agents to generate a new SaaS product while sipping a latte in a bar. To work like that, I believe you have to let go of really digging into the code, which in my experience is needed if you want good quality.
Yet I think coding agents can be quite a useful help for some of the trivial but time-consuming chores.
For instance I find them quite good at writing tests. I still have to tweak the tests and make sure that they do as they say, but overall the process is faster IMO.
They are also quite good at brute-forcing some issue with a certain configuration in a dark corner of your android manifest. Just know that they WILL find a solution even if there is none, so keep them on a leash!
Today I used Claude to bring a project I abandoned 5 years ago up to speed. It's still a work in progress, but the task seemed insurmountable (in my limited spare time) without AI; now it feels like I'm half-way there in 2-3 hours.
I think we really need to have a serious think about what "good quality" means in the age of coding agents. A lot of the effort we put into maintaining quality has to do with maintainability, readability, etc. But is that relevant if the code isn't for humans? What is good for a human is not necessarily what is good for an AI (not to say there is no overlap). I think there are clearly measurable things we can agree still apply, around bugs, security, etc., but there are also going to be some things we need to just let go of.
You can’t drop anything as long as a programmer is expected to edit the source code directly. Good luck investigating a bug when the code is unclear semantically, or updating a piece correctly when you’re not really sure it’s the only instance.
I think that's the question. Is a programmer expected to ever touch the source code? Or will AI -- and AI alone -- update the code that it generated?
Not entirely unlike other code generation mechanisms, such as tools for generating HTML based on a graphical design. A human could edit that, but it may not have been the intent. The intent was that, if you want a change, go back to the GUI editor and regenerate the HTML.
So like we went from assembler to higher level programming languages, we will now move to specifications for LLMs? Interesting thought... Maybe, once the "compilers" get good enough, but for mission critical systems they are not nearly good enough yet.
Right. I work in aerospace software, and I do not know if this option would ever be on the table. It certainly isn't now.
So I think this question needs to be asked in the context of particular projects, not as an industry-wide yes or no answer. Does your particular project still need humans involved at the code level? Even just for review? If so, then you probably ought to retain human-oriented software design and coding techniques. If not, then, whatever. Doesn't matter. Aim for whatever efficiency metric you like.
Then again, would anyone have guessed we’d even be seriously discussing this topic 10, 20, 40 years ago?
Maybe. This book from 1990, https://mitpress.mit.edu/9780262526401/artificial-intelligen..., envisions a future of AI assistance that looks not too far off from today.