Vibe coding creates a bus factor of zero

mindflash.org

175 points by AntwaneB 7 hours ago

p1necone - 6 hours ago

If you're using LLMs to shit out large swathes of unreviewed code, you're doing it wrong, and your project is indeed doomed to become unmaintainable the minute it goes down a wrong path architecturally, or you hit a bug with complex causes, or whatever.

Where LLMs excel is in situations like:

* I have <special snowflake pile of existing data structures> that I want to apply <well known algorithm> to - bam, half a day's work done in 2 minutes.

* I want to set up test data and the bones of unit tests for <complicated thing with lots of dependencies> - bam, half a day's work done in 2 minutes (note I said to use the LLM for a starting point - don't generate your actual test cases with it, at least not without very careful review - I've seen a lot of really dumb AI-generated unit tests).

* I want a visual web editor for <special snowflake pile of existing data structures> that saves to an SQLite DB and has a separate backend API - bam, 3 days' work done in 2 minutes.

* I want to apply some repetitive change across a large codebase that's just too complicated for a clever regex - bam, work you literally would never have bothered to do before, done in 2 minutes (a sketch of this follows below).
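
For concreteness, here's a minimal sketch of that last kind of chore. The names are made up for illustration (a fetch() helper whose timeout kwarg gets renamed to timeout_s across a tree under src/), not taken from any real codebase:

    # Hypothetical example: rename the `timeout` kwarg to `timeout_s`, but only
    # on calls to `fetch`. A regex can't reliably tell which call a kwarg belongs
    # to (multi-line calls, other functions with the same kwarg name); an AST
    # walk can. Note: ast.unparse drops comments and formatting, so treat this
    # as a starting point to review, not a finished codemod.
    import ast
    import pathlib

    class RenameFetchTimeout(ast.NodeTransformer):
        def visit_Call(self, node: ast.Call) -> ast.Call:
            self.generic_visit(node)  # transform nested calls first
            if isinstance(node.func, ast.Name) and node.func.id == "fetch":
                for kw in node.keywords:
                    if kw.arg == "timeout":
                        kw.arg = "timeout_s"
            return node

    for path in pathlib.Path("src").rglob("*.py"):
        tree = ast.parse(path.read_text())
        path.write_text(ast.unparse(RenameFetchTimeout().visit(tree)) + "\n")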

You don't need to solve hard problems to massively increase your productivity with LLMs; you just need to shave yaks. Even when it's not a time saver, it still lets you focus mental effort on interesting problems rather than burning out on endless chores.

johnfn - 6 hours ago

The article throws out a lot of potential issues with AI-generated code, but doesn't stop for a moment to consider whether solutions to these problems currently exist or might exist.

- Before LLMs, provided that your team did some of their due diligence, you could always expect to have some help when tackling new code-bases.

Has the author never worked on legacy code before?

- (oh, and it has forgotten everything about the initial writing process by then).

Does the author not think this can ever be solved?

- Because of the situation of a Bus Factor of zero that it creates, vibe coding is fundamentally flawed. That is, only until there is an AI that can generate 100% accurate code 100% of the time, and it is fed 100% accurate prompts.

Why does it require 100% accuracy 100% of the time? Humans are not 100% accurate 100% of the time and we seem to trust them with our code.

pratikshelar871 - 4 hours ago

I recently joined a team with a very messy codebase. The original devs were long gone, and even the people maintaining it didn't really understand large parts of the code. The bus factor was effectively zero.

What surprised me was how useful AI was. It helped me not only understand the code but also infer the probable intent behind it, which made debugging much faster. I started generating documentation directly from the code itself.

For me, this was a big win. Code is the source of truth. Developer documentation and even shared knowledge are often full of bias, selective memory, or the "Chinese whispers" problem where the story shifts every time it's retold and never gets documented. Code doesn't lie; it just needs interpretation. Using AI to cut through the noise and let the code explain itself felt like a net positive.

paulmooreparks - 22 minutes ago

I find myself acting as a brutal code reviewer more than a collaborator when I lean too heavily on an agent. I literally just typed this into the agent's chat pane (GPT-5, in this case), after finding some less-than-optimal code for examining and importing REST API documentation.

> Testing string prefixes or file extensions is bound to fail at some point on some edge case. I'd like to see more robust discovery of formats than this. This reeks of script-kiddie code, not professional-quality code.

It's true more often than I'd like that the quality of code I see generated is script-kiddie level. If I prompt carefully beforehand or review harshly after, it generally improves, but I have to keep my guard up.
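
For what it's worth, the kind of thing I mean by "more robust discovery" might look like sniffing the document's content instead of its name. This is only an illustrative sketch; the recognized keys and formats here are assumptions for the example, not the actual code in question:

    # Illustrative sketch: classify an API-description document by parsing its
    # content rather than trusting a filename extension or string prefix.
    import json
    from typing import Optional

    def detect_spec_format(text: str) -> Optional[str]:
        docs = []
        try:
            docs.append(json.loads(text))
        except ValueError:
            pass
        try:
            import yaml  # PyYAML, if installed
            docs.append(yaml.safe_load(text))
        except Exception:
            pass
        for doc in docs:
            if isinstance(doc, dict):
                if "openapi" in doc:
                    return "openapi " + str(doc["openapi"])
                if "swagger" in doc:
                    return "swagger " + str(doc["swagger"])
        return None  # unknown: surface an error instead of guessing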

TuringNYC - 4 hours ago

The Bus Factor was an issue long before LLM-generated code. Very few companies structure work to allow a pool of more than one individual to understand and contribute to each area. What I've found is that when companies are well structured, with multiple smart individuals per area, the output expectation just ends up creeping up until, again, there is too much to really know. You can only get away from this with really good engineering management that deliberately moves people around the codebase and trades off speed in the process. I have tried to do this, but sometimes the pressure from stakeholders for speed is just too great to do it perfectly.

Shameful plug: I've been writing a book on this, my retrospective as a CTO building like this. I just updated it so you can choose your price (even $0) to make this a less shameful plug on HN: https://ctoretrospective.gumroad.com/l/own-your-system

I don't think anyone has the perfect answer yet, but LLM-built systems aren't that different from having the system built by 10 different people on eLance/Upwork/Fiverr... so the principles are the same.

spankalee - 5 hours ago

There is a really important point here, and it's critical to be aware of it, but we're really just at the beginning of these tools and workflows, and these issues can be solved, IMO, possibly better than with humans.

I've been trying to use LLMs to help code more, with mixed success, honestly, but it's clear that they're very good at some things and pretty bad at others. One of the things they're obviously good at is producing lots of text; two important others are that they can be very persistent and thorough.

Producing a lot of code can be a liability, but an LLM won't get annoyed at you if you ask it for thorough comments and updates to docs, READMEs, and ADRs. It'll "happily" document what it just did and "why" - to the degree of accuracy it's able, of course.

So it's conceivable to me, at least, that with the right guidance and structure an LLM-generated codebase might be easier to come into cold years later, for both humans and future LLMs, because it could have excellent documentation.

siliconc0w - 5 hours ago

The problem is that our brains really don't like expending calories on information we don't repeatedly use, so the further you get from something, the less you understand or remember it.

So even if you aren't vibe coding and are trying to review every change, your skills are atrophying. We see this all the time as engineers enter management: they become super competent at the new skills the role requires but quickly become pretty useless at solving technical problems.

It's similar to why it's so hard to go from Level 2 to Level 5 in driving automation. We're not really designed to be partially involved in a process - we quickly lose attention, become lazy, and blindly trust the machine. Which is maybe fine if the machine is 100% reliable, but we know that isn't the case.

juancn - 6 hours ago

I think the article underestimates how much intent can be grasped from code alone. Even without comments.

Humans (and I strongly suspect LLMs, since they're a statistical synthesis of human production) are fairly predictable.

We tend to tackle the same problems the same way. So how something is solved tells you a lot about why, by whom, and when it was solved.

Still, it's a valid point that much of the knowledge is now obscured, but the same could be said of an organization with high employee churn.

s1mplicissimus - 5 hours ago

I agree with the premise and the conclusion, but over almost 20 years of writing, adapting, and delivering software I've more than once been in exactly the same situation: no one to ask, and the only person even vaguely familiar with software development left half a year ago. Half of the processes have changed since the software was written, and the people who owned them have left, too. So while I agree that LLMs will accelerate this process, in my opinion it's not a new flavor, just more of an existing problem. Glad to see this kind of thinking though.

aeternum - 2 hours ago

Perhaps bus factor zero doesn't matter.

A good dev can dive into a completely unknown codebase with or without tools like a debugger and figure it out. AI makes this far easier.

Some great devs/reverse-engineering experts can do the same without even the source code, just the compiled binary. Again, AI tools can now do this faster than any human.

Security researchers have figured out the intricacies of a system with no more than a single string as input and an error code as output.

foxfired - 4 hours ago

I used a similar metaphor in the past referencing "The Machine Stops" [0] by E.M. Forster. Yes, in the near future, we will still be able to read code and figure out what it does. I work on legacy code all the time.

But in the long term, when experienced developers actually feel comfortable letting LLMs write large swaths of code, or when the machine no longer needs to generate human-readable code, then we will start forgetting how it works.

[0]: https://news.ycombinator.com/item?id=43909111

d_watt - 6 hours ago

It's potentially the opposite. If you instrument a codebase with documentation and configuration for AI agents to work well in it, then in a year that agent will be able to do that same work just as well (or better, with model progress) when adding new features.

This assumes you're adding documentation, tests, instructions, and other scaffolding along the way, of course.

schlowmo - 6 hours ago

At least we also have LLMs to generate our status updates during outages of our SaaS products while groping around in the dark.

0xWTF - 2 hours ago

This concept of deploying unreviewed vibe code strikes me as very similar to using a fallen log as a bridge to cross a ravine. Yes, it works, until the day it doesn't. And that day is likely to be much sooner than if it had been a steel-reinforced concrete design signed off by a PE.

shusaku - 4 hours ago

I've got a new project I've been handling with Claude Code. Up until now I've always pair coded with AIs, so I would know (and usually tweak) every bit of code generated. Now with the agent, it's easy to miss what's being made.

I've been trying to resolve this with things like "make a notebook that walks through this module's functions", etc., to make it easier for me to review.

In the spirit of literate programming though, why not have these agents spend more time (tokens... money) walking you through what they made?

Likewise, if dev A vibe codes something and leaves it to dev B to maintain, we should think about what AI workflows can get B up to speed fast. “Give me a tour of the code”

proxygeek - an hour ago

It's fascinating... I didn't think about the Bus Factor at all wrt vibe coding. Feels obvious in retrospect. But I feel there's the other side of software beyond maintainable, professional-grade software requirements. There are a lot of use cases for basic software that solves that one problem in that one specific way and gets it over with. A bit like customized software with little scope and little expectation of long-term support. Vibe-coding excels there.

In a way, I have been thinking about it [1] as the difference between writing a book and writing a blog post - the production qualities expected in the two are wildly different. And that's expected, almost as a feature!

I think as “writing” and distributing new software keeps getting easier - as easy as writing a new blog post - the way we consume software is going to change.

[1]: https://world.hey.com/akumar/software-s-blog-era-2812c56c

interstice - 4 hours ago

> The only thing you can rely on is your ability to decipher what a highly imperfect system generated, and maybe ask that same imperfect system for explanations about your code, or rather its code (oh, and it has forgotten everything about the initial writing process by then).

This just sounds like every other story I hear about working on ossified codebases as it is. At least AI can ingest large amounts of code quickly, even if as of today it can't be trusted to actually decipher it all.

krrishd - 5 hours ago

Related premise / HN discussion from a bit ago - “AI code is legacy code from day one”:

https://news.ycombinator.com/item?id=43888225

abullinan - 3 hours ago

I think it is negative: it actually drains knowledge. It is an anti-knowledge field, because experts won't be hired if they can be vibed. This sucks all the brains out of the room. Hence less than zero.

appease7727 - 4 hours ago

LLMs aren't bad for programming in general.

LLMs are bad for bad programmers. LLMs will make a bad programmer worse and make a layperson think they're a prodigy.

Meanwhile, the truly skilled programmers are using LLMs to great success. You can get a huge amount of value and productivity from an LLM if and only if you have the skill to do it yourself in the first place.

LLMs are not a tool that magically makes anyone a good programmer. Expecting that to be the case is exactly why they don't work for you. You must already be a good programmer to use these tools effectively.

I have no idea what this will do to the rising generation of programmers and engineers. Frankly I'm terrified for them.

Rickasaurus - 4 hours ago

The flaw in this reasoning is that AI can also help you understand code much more quickly than you could before. We are now in fractional bus factor territory.

ants_everywhere - 5 hours ago

LLMs make it vastly easier to work on unfamiliar codebases.

ardillamorris - 5 hours ago

We're already at a bus factor of close to zero for most bank code written in COBOL lol

jongjong - an hour ago

The project foundation is everything. LLMs are sensitive to over-engineering. The LLM doesn't have an opinion about good code vs bad code.

If you show it bad code and ask it to add features on top, it will produce more bad code... It might work (kind of), but it's more likely to be buggy and have security holes. When the context you give to the LLM includes unnecessary complexity, it will assume that you want unnecessary complexity and generate more of it for you.
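
To make "unnecessary complexity" concrete, here's a made-up before/after; nothing in it comes from my actual codebases, it's just the flavor of difference I mean:

    # Made-up illustration. Both versions load a user by id, but the second
    # buries that intent under indirection; an LLM extending the second tends
    # to produce more registries and factories to match the surrounding style.

    # Plain context: intent is obvious, so generated additions stay small.
    def get_user(db, user_id):
        return db.query("SELECT * FROM users WHERE id = ?", (user_id,))

    # Over-engineered context: same behavior, much more surface area to imitate.
    class RepositoryFactory:
        _registry = {}

        @classmethod
        def register(cls, name):
            def wrap(repo_cls):
                cls._registry[name] = repo_cls
                return repo_cls
            return wrap

        @classmethod
        def create(cls, name, db):
            return cls._registry[name](db)

    @RepositoryFactory.register("user")
    class UserRepository:
        def __init__(self, db):
            self.db = db

        def find_by_id(self, user_id):
            return self.db.query("SELECT * FROM users WHERE id = ?", (user_id,))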

I tried Claude Code with both a bad codebase and a good codebase; the difference is stark. The first thing I noticed is that, with the good codebase free of unnecessary complexity, it generates a lot LESS code for any given feature/prompt. It's really easy to review its output and it's very reliable.

With a bad, overengineered codebase, Claude Code will produce complex code that's hard to review... even for similar-sized features. Also, it will often get it wrong and the code won't work. Many times it adds code which does literally nothing at all. It says "I changed this so that ..., this should resolve the issue ..." - but then I test and the issue is still there.

Some bad coders may be tempted to keep asking Claude to do more to fix the issue and Claude keeps adding more mess on top. It becomes a giant buggy hack and eventually you have to ask it to rewrite a whole bunch of stuff because it becomes way too complicated and even Claude can't understand its own code... That's how you get to bus factor of 0. Claude will happily keep churning out code even if it doesn't know what it's doing. It will never tell you that your code is unmaintainable and unextendable. Show it the worst codebase in the world and it will adapt itself to become the worst coder in the world.

fzeroracer - 6 hours ago

Unfortunately the corporate machine has been converging on a bus factor of 0. I've been part of multiple teams now where I was the only one holding knowledge of critical subsystems, and whenever I attempted to train people on them, it was futile, mainly because they would get laid off during 'cost-saving measures'.

There were times when I was close to getting fed up and just quitting during some of the high-profile ops I had to deal with, which would've left the entire system inoperable for an extended period of time.

I fully expect a lot of these issues to come home to roost as AI compounds loss of institutional knowledge and leads to rapid system decay.

danjl - 5 hours ago

The folks who think they can now suddenly program without any experience, and without needing to understand how their product works, are suffering from the Dunning-Kruger effect. Actually, it is a much broader segment and includes product managers, executives, VCs, and the general public.

oompydoompy74 - 6 hours ago

By that same logic, if a project is documented so thoroughly that an agent could handle all the work, then the bus factor effectively becomes infinite.

zzzeek - 5 hours ago

Just not even ten years ago, the discussion here was all about "software engineering" trying to be more legitimized as a formal engineering practice: whether there should be licensing, whether there should be certifications, lots of threads about formal methods to prove algorithms work. And look where we are now, arguing whether humans should even care to look at the code we are producing and shipping. Crazy shit, man.

getnormality - 5 hours ago

This guy thinks bus factors of zero started with ChatGPT. Hahahahahaha. Adorable.

How many of you have asked about a process and been told that nobody knows how it works because the person who developed it left the company?

There was a blog post at the top of HN about this, like, yesterday.

I hate the current AI hype hothouse and everything it seems to be doing to the industry... but I couldn't help but laugh.

The post is great. Bus factor of zero is a great coinage.

jonny_eh - 6 hours ago

Everything you interact with on a daily basis is either natural, or designed by a human. Until now.