I don't think AI will make your processes go faster

frederickvanbrabant.com

425 points by TheEdonian 9 hours ago


angarg12 - 5 hours ago

> This exact thing is what software developers have been begging for since the beginning of the profession: Receiving a detailed outline of the problem and what the end result should look like.

> This is often the part that slows down software development. Trying to figure out what a vague, title only, feature request actually means.

But that is exactly what Software Engineering is! It's 2026 and the notion that you can get detailed enough requirements and specifications to one-shot a perfect solution needs to die.

In my experience AI has made us able to iterate on features or ideas much faster. Now most of the friction comes from alignment and coordination with other teams. My take is that to accelerate processes we should reduce coordination overhead and empower individuals and teams to make decisions and execute on them.

phyzix5761 - 8 hours ago

I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottleneck in software.

When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what the data is, where it's stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.

In order to get good results with LLMs we need to do something similar. Vague requirements get vague results.

kj4211cash - 8 hours ago

On the one hand, this is a clean post that explains exactly what a lot of us have been thinking and seeing on the job at large organizations doing tech work. Dear Author, I agree with you 110% and want everybody else to come to understand what you have written.

On the other hand, it feels like we've been over this tens of times recently, on HN specifically and IRL at work. Another blog post isn't going to convince leaders that this is how the world works when they are socially and financially incentivized to pretend like AI really will speed things up. So now I just wait for their AI projects to fail or go as slowly as previous projects and hope they learn something.

ddosmax556 - 5 hours ago

This article assumes that AI only has an impact on the development phase, which is certainly not true. It can speed up every part of the process, including ideation, legal, documentation, development, and deployment.

Ideation: Throw ideas back & forth, cross reference with knowledge bases, generate design documents. Documentation: Generate large parts of docs. Development: Clear. Deployment: Generate deployment manifests, tooling around testing, knowledge around cloud platforms.

Almost every step can be done better & faster with AI. Not all of them, but a lot.

Even development. Yes, some part of your job involves understanding the problem better than anyone & making solutions. But some parts are also purely chore. If you know you need a button doing X, then designing that button, placing it, figuring out edge cases with hover & press states, connecting it to the backend, etc. - this is a chore that can be skipped. The same principle applies to almost all steps.

siliconc0w - 3 hours ago

People don't really understand that non-trivial software development isn't even 50% coding. The coding step is generally the 'easiest' part and given to junior developers. In a large org most product changes span multiple systems and human operations. Seniors and even mid-levels generally spend most of their time figuring out how to shape the local priorities into a new arrangement of the existing cybernetic entity, and then getting buy-in on that new vision given that these other teams have their own priorities.

This naturally involves a lot of tradeoffs and politics - senior engineers know to avoid adding 'weight' to their airframes and fight hard to avoid adding scope to the systems they're responsible for or divergence from their intended direction of travel. So compromises have to be struck or escalations to management to choose between priorities have to play out.

Maybe AI solves that as well but that is a lot more difficult lift.

somenameforme - 7 hours ago

I think there's an interesting dichotomy. I find that for things I'm already capable at, LLMs are relatively inconsequential. But for things I'm no good at, it's a huge game changer. For a large company, that's going to be able to hire out most needed roles for any given project, this means the overall effect is going to be relatively inconsequential. At best, they may be able to cut down on labor costs by having one guy do a mediocre job at 5 people's jobs in exchange for a worse product. Short-term gains for long-term costs, wcgw?

But for a small studio, or independent developer, LLMs are a big game changer. Being able to do a mediocre job at 5 people's jobs is a huge leap over trying to get by without those jobs - relying on third party assets or other sorts of content, or even worse - doing a really awful job of trying to improv those jobs. See the UI of basically any program ever that was clearly laid out by a programmer and not a designer. Or there's the whole trying to rip off stuff from dribbble, but lacking the skills to do so. Whereas with AI, you can suddenly competently rip off everything and everybody - it's basically their entire MO.

shalmanese - 8 hours ago

This is all substantially correct and gives us hints as to where to focus for AI to make the processes go faster.

Eg: I had a product manager say to me that he envisions a future where any meeting with stakeholders that does not result in an interactive prototype by the end of the meeting would be considered a failure. This feels directionally correct to me.

The other thing I expect to see is Vibecoding being the "Excel 2.0" where it allows significant self-serve of building interactive apps that's engaged in a continual war with IT to turn them into something with better security guarantees, proper access control & logging, scalability, change management etc.

But the larger historical point here is that every revolutionary transition produces, in the early stages, "Steam Horses". The invention of the steam engine had people imagining that the future of transportation would involve horse shaped objects, powered by steam, pulling along conventional carts. It wasn't until later developments that we understood the function of transportation as divorced from the form.

I started talking about Steam Horses originally in the context of MOOCs, which was a classic Steam Horse idea.

giancarlostoro - an hour ago

AI is not supposed to bypass the process, but it can speed things up nonetheless: it can help with refactoring, writing boilerplate, and finding errors you never even spotted before, things that linters cannot catch.

I see so many comments that seem to me like either they don't use standard known processes, or they assume AI doesn't need you to follow the standards.

Can I ship more code and features? Absolutely I can, if I have a good set of requirements and thorough testing. All AI-written code needs to be reviewed and tested, and should be in discrete commits and pull requests. Anyone pushing a PR with thousands of lines of code is a red flag: you wouldn't do it without AI, so why would you do it with AI? Major rewrites / refactors are the only known exception, and even then I would argue that these should still have discrete commits you can step through so you can see how things changed and make a more informed decision.

If you show me a massive one shot commit or PR I will deny it. Break it down into bits a normal developer can audit.

ryanmcbride - 2 hours ago

Been having conversations like this with a client I've worked with. They get approved by corporate for us to use claude and ask how much faster we'll be able to move with it.

I tell them, "Us engineers will probably be able to deliver some of our stuff faster, but it won't have even a slight effect on the actual deliverable, because we've never been the bottleneck." It's the fact that the process to get an S3 bucket allocated there takes (not exaggerating) 4 weeks.

p2detar - 8 hours ago

> Yes, AI can generate code quickly (whether that’s a good thing is open for debate), but that doesn’t mean it’s generating the correct code.

No, the code is actually almost always correct. The way it's added is probably not what you're going to like, if you know your code base well enough. You know there's some ceremony about where things are added, how they are named, how many comments you'd like to add and where exactly. Stuff like that seems to irritate people like me when it's not done right by the agent, and it seems to fail even if it's in the AGENTS.md.

> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.

Almost 2 decades in IT and I absolutely do not believe this can ever happen. And if it does, it’s so rare, it’s not even worth talking about it.

gwbas1c - 2 hours ago

I've found that AI is extremely useful when coding: For example, a task that used to take 3 days I can now do in about a day, in part because I can do things like have the agent write tests, or because I can have the agent start from some higher-level instructions which I can then clean up and debug.

BUT: The article is 100% right that I spend a lot of time doing other tasks: reviewing other teammates' work, interacting with colleagues, planning, etc. AI isn't quite as helpful there. For example, I find that Copilot code reviews don't add a lot of value; and the AI isn't good at judging a UI.

Maybe we'll get there soon? It's starting to look like the biggest challenge with AI is learning how to use it correctly.

pvtmert - 2 hours ago

Absolutely lovely article.

> Software development is about translating a problem into a solution that a computer can understand and automatically resolve. Preferably in a secure and scalable way.

True; meanwhile software engineering puts the optional bits (i.e. secure & scalable) into the requirements bucket.

---

For the problem description and gathering requirements sentiment; I don't think we'll _ever_ have a 100% proper way of doing this. If we did, we'd basically solve any and all problems in the world.

Nevertheless, I think AI can help with investigating and exploring the problem space. Especially when the problem is an already solved thing that the prompter hasn't gained enough expertise yet.

Moreover, I think (and keep mentioning) we will see different kinds of models in the near future. Those would be more specialized per industry, per language (both programming and human languages), even per field.

Those will open up newer areas in the employment & job market. Something like an "AI trainer", but more of a knowledge-worker style. Although this can also be automated with LLMs, the limits on context length/size plus the amount of compute required to re-train models to iterate faster are both quite heavy.

usernametaken29 - 8 hours ago

Instead of mandatory AI workshops, simply cancel all meetings with more than 3 people and no written agenda. Instead, block the meeting time for productive work. That'll be $2,000 of advisory fees for the insane productivity gains I just unlocked for you. You're welcome.

sillysaurusx - 8 hours ago

I actually have data on this. I’ve been building sharc, a Common Lisp port of Hacker News. https://www.github.com/shawwn/sharc

If that sounds familiar, it’s because it’s what dang did over the course of several years.

It’s taken a few weeks. I started right around May, and now it’s able to render large HN threads (900+ comments) within a factor of five of production HN performance. (Thank you to dang for giving actual performance numbers to compare against.)

A couple days ago, mostly out of curiosity, I ran Claude with “/goal make this as fast as HN.” Somewhat surprisingly, it got the job done within a couple hours. I kept the experiment on separate branches, because the code is a mess, just like all AI generated code starts as. But the remarkable part is that it worked, and I can technically claim to have recreated HN within a few weeks.

The real work is in the specifications. My port of HN is missing around a hundred features. Things from favorited comments, to hiding threads, to being able to unvote and re-vote.

But catching up to HN is clearly a matter of effort (time spent actually working on the problem with Claude), not complexity. Each feature in isolation is relatively easy. Getting them all done within a short time span without ruining the codebase is the hard part. And I think that’s where a lot of people get tripped up: you can do a lot, but you have to manage it tightly, or else the codebase explodes into an unreadable mess.

It’s true that if you don’t do that crucial step of “manage the results”, you’ll end up making more work for yourself in the long run, by a large factor. But it’s also true that AI sped me up so much that I was able to do in weeks what would’ve otherwise taken years (and did take dang years). I’m not claiming parity, just that I got close enough to be an interesting comparison point.

AI can clearly accelerate us. But we need to be disciplined in how we use it, just like any other new tool. That doesn’t change the fact that it does work, and I think people might be underestimating how good the results can be.

ivansmf - 6 hours ago

The article severely underestimates deployment times for large, worldwide services. Usually the strategy is to have a smaller "blast radius" for deployments, going in stages that are also usually time-bound ("let it bake"). It also does not account for outages and fixing things you only find in deployment. Programming languages like Python, or Java using injection (e.g. Guice), either need pristine testing (and all test teams were converted to dev 20 years ago) or have a magical way to destroy all the help compilers and static analysis can give you. So yeah, you take the 4 weeks of development out of your 6-month deployment, then add 6 weeks of debugging and retries by using AI. You're welcome, that will be 3 million tokens, of which you wrote 1k; the rest was system prompts and "reasoning", which you do not control. This whole AI space is highly fixable, but it requires investment no one seems willing to make, particularly in areas that were mistakes from the past.

boron1006 - 5 hours ago

At least where I am we can’t and shouldn’t know all the requirements of a project beforehand^. Every project is an iterative learning process between the users, product and engineers. The problem is if everyone uses AI to replace their thinking it breaks that process and no one learns anything.

^ I say shouldn’t because I work in research engineering. Most of the needs of our users are pretty unique. We’ve had people come in and try to specify every piece of work, and they ended up building a CRUD app no one wanted or used.

adam_patarino - 8 hours ago

> Every software developer knows that you can’t make projects go faster just by typing faster. If that were the case we would all be taking typing lessons.

So well said.

AI is unveiling how the bureaucracy is the slow part.

brkn - 5 hours ago

This post makes it sound like an engineer's role is only the collection and filling of feature gaps, but it leaves out completely that an engineer is also responsible for the feasibility of a feature. If you get a request for a feature but you are aware of the current system's limitations, it is your job to come up with a solution that fits the frame the business side has given. But nowadays engineers have been drilled so much that showing resistance to management is portrayed as a lack of skill, not a lack of trust from management in their staff. And when it is clear that your management actually doesn't care, it just tells you how much of the self-proclaimed mission is the real motivation behind these people. If the acceptance criteria of management do not meet your principles, you might not be the right fit; and if, in my opinion, management's acceptance criteria are mostly based on the next promise made to investors or by sales to prospects, their goal is to make money, not to develop a quality product.

Yokohiii - 8 hours ago

Delivering more complete details for a task at hand is a noble goal, but there is a problem.

Programming is a logical circuit breaker. There is a wide range of incompleteness that halts development or puts the solutions in an unpublishable state.

A product person has no compiler, no RAM, no database, no state machine. There is nothing that can fail. There are probably strategies to weed out some issues, but none will be perfect.

We need to combine reality with computers. Computers set the constraints and we can only check if we are in bounds of the constraints by solving the problems with computers.

Oddly enough, AI so far has nothing to offer to improve the "product people" problems.

jldugger - 6 hours ago

Fascinating, I was literally thinking about how to communicate this to coworkers the other day, literally down to the gantt chart. Now I don't even have to make one =)

> We are now talking about software development, but this is applicable to all processes that take longer than you would like.

Indeed, it's kind of a generalized version of Amdahl's law. Since we only speed up a portion of the work, there are upper bounds on time saved. Worse, work in progress tends to bunch up at a specific point: code review. A coworker of mine literally complained two months ago now that nobody was reviewing code (and that it was blocking his work). I'm not sure review delay has actually gotten better since.
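A back-of-the-envelope Amdahl-style bound makes the point concrete. The 30% coding share and the 5x speedup factor below are made-up illustrative numbers, not measurements from any study:

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when only `accelerated_fraction` of the work
    gets `factor` times faster; the rest stays at its old pace."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# If coding is 30% of the delivery cycle and AI makes coding 5x faster:
print(round(amdahl_speedup(0.30, 5), 2))             # 1.32
# Even an infinitely fast coder can't beat the non-coding remainder:
print(round(amdahl_speedup(0.30, float("inf")), 2))  # 1.43
```

So with those assumed numbers, the whole process gains at most ~43% no matter how fast the coding step gets; the rest of the time lives in review, coordination, and approvals.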

neversupervised - 6 hours ago

It’s completely wild to me that lifelong programmers come into contact with agentic coding and come to the conclusion that their jobs are safe for one reason or another. AI will definitely be able to write entire software, inclusive of figuring out requirements and asking the right questions. It’s not that far already. Why is it that everyone looks at weaknesses of a technology that didn’t exist a couple years ago instead of appreciating the incredible rate of improvement? I know why, because it’s inconvenient to the narrative of what makes us valuable. But still, our job is to turn ideas into a sequence of logical steps. Why can’t we do the same when forecasting the impact of AI on our jobs?

netcan - 5 hours ago

>What people typically don’t do is look at why this is taking so long, and even more importantly: long duration does not automatically mean the problem originates there.

To some extent, we tell as many lies as we can get away with. Some answers are more convenient than others.

"Why" this is taking so long, like "why did this fail?" are prone to broadly agreed lies. Sometimes this is for obvious blame liability reasons. Often, this is because the lie conflicts with some "meta."

One such fallacy is the idea that software=value. Code= money, because it cost money to write. Features=revenue. Etc.

Irl.. startups produce features very quickly because they actually need features. They start with zero features.

But... LinkedIn, visa or even Facebook.... What they are short on is opportunities to develop code with value. Ie... Something that will increase revenue.

FB aren't resource constrained. They're demand constrained. If there were a "write code, make revenue" opportunity available... they'd have taken it already.

This totally conflicts with the experience of working somewhere. That's because you have wishlists, road maps and deadlines.... and it always appears that demand for code is sky high.

eddy-sekorti - 8 hours ago

Yes, it is true for large enterprises, but not for startups and individual creators. AI is accelerating speed for anyone who is not stuck in corporate bureaucratic processes.

mactavish88 - 4 hours ago

I don't think we're going to be able to have rational conversations about this with C-level folks for quite some time. They mostly seem too wrapped up in copying each other to think clearly, and it's only when the bottom line starts suffering that we might be able to start asking some questions about their strategy.

vips7L - 37 minutes ago

It always comes back to Amdahl’s law.

sonnyproto - an hour ago

I really love AI to be honest. I feel like I'm using it to achieve much more than I could ever dream of. It changed my life!

bob1029 - 5 hours ago

I think it will.

The primary issue is simply that developers are the most immediately impacted by this technology. The combination of being able to adopt, willing to adopt, and the tech actually being incredibly good at developer related concerns is unique. The rest of the business will eventually catch up. I'm watching it happen in real time. It is agonizingly slow in most places, but it is happening.

The developers being able to drain a one year long work queue in an afternoon is meaningless if the rest of the business cannot absorb the effects of that work in the same timeframe. The business will not leave your idle work queue on the table for long though. Keep pulling a vacuum on them and they will fill the space eventually.

chilmers - 8 hours ago

It’s amazing to see some people talk with 100% confidence about the macro view of AI assisted development when we have had strong coding agents available for less than a year.

elktown - 7 hours ago

People are far too charitable about an industry with chronic short-term thinking. We'll just lower the standards to whatever fits the success story.

pu_pe - 8 hours ago

Some organizations added a ton of process around software development because it is expensive and risky. They require a ton of approvals and sign-offs, then some managing overhead on top to check if their investment is on the right track. This approval process is bound to change by the fact that development is far cheaper and faster now.

Another aspect that is not captured here is that the lawyers and subject matter experts will also be using AI to speed up their parts.

runtime_terror - 4 hours ago

I think the thing that gives human developers a leg up is the ability to read between the lines of a spec and intuit the expected output better than an LLM can in many cases.

A human has the cumulative experience of a career: the nuances behind every decision and their evolved context at their given company. This context allows them to take that one-line spec and extract tons of detail from it by knowing who wrote the ticket, what the "trigger" for the ticket was, what other work is being done in tandem that might need to be incorporated, etc.

LLMs can be given this context, but it's a manual process of transcription into their prompt/memory/skills, and that content must be continually updated and refined. It just pushes lots of work into spec writing, away from the more intuitive mode of feature development a lot of us have a level of mastery over. Then you must constantly go back and forth to refine the output.

Any senior engineer knows that a lot of that communication is wasted energy. If I have a good idea of what I'm building I can develop the feature in a focused flow of output that I refine in an almost unconscious way because I don't need to translate intent into words, just code, and that process is incredibly automatic after years of developing software.

When all the effort is placed into writing specs, re-prompting and then reviewing (often over and over again), that intuitive and automatic ability to build software degrades. Think of a time when you were mostly focused on PR reviews and not contributing to a project. You may have been able to help developers build better code, but if you were to jump into that project to contribute, there would be a real and painful effort to re-familiarize yourself and reconstruct that intuitive familiarity of the project.

LLMs have many very useful qualities, but so far I fear an over-reliance on them can be more a hindrance than a benefit.

lmeyerov - 6 hours ago

For a while now it's felt similar to what we see in parallel computing:

- shift towards throughput-oriented vs latency-oriented. Can juggle more tasks, but increasingly hard to speed up individual ones.

- strong scaling is tough. Might even see slowdowns for individual tasks, so reliable benefits come from being able to juggle more and eat the per-task inefficiency

- amdahl's law: we can't speed up tasks beyond their longest sequential (human) unit, so our work becomes identifying those bits and working on them. Related: you can buy bandwidth, but you can't buy latency

delichon - 8 hours ago

The promise of AI is in doing things at all that couldn't be automated before, at least economically. And when you find a use case where a bit of automated inference is sufficient and can replace human inference, it can wildly speed up a process, from when Susan has time for it, to right now.

hydra-f - 7 hours ago

Handholding is an issue which is affected by 3 factors: the model, the tooling and the human expertise. Out of the three, the last is the weakest link, due to the fact that it takes the longest to nurture.

Once tooling (e.g. agent harnesses, external tools) becomes more mature and consistent, the other 2 will become less of a bottleneck.

If I were to take a gamble here, I would argue that development will at some point reach the more ideal scenario, whereas project planning and scoping will become longer. Also, documentation will take almost as long as development, slightly longer at the edges.

The new ai-assisted era will most likely push companies to adopt a Waterfall management, rather than an Agile one.

hibikir - 7 hours ago

Every large corporation is stuck in communication problems and approval processes. They have grown so large as to have minimal alignment between what the company attempts to produce, what makes the company profitable, and what people actually do. Enshittification, The Gervais Principle, Bullshit Jobs. Pick your favorite, flawed way to look at what is going on, it's all blind people touching different parts of the same elephant.

The way AI makes your processes go faster will have little to do with cutting software development time in itself, but with letting an organization be made of fewer people, which in itself lowers your misalignment issues. A giant company of 200K people will still be about as messy as one today, but you might be able to do a lot more with the same number of people, just like a lone programmer today, without AI, already does quite a bit more than anyone could do by themselves in the 80s.

Maybe some of the advantages are that you don't need quite as many developers, or maybe you can use a smaller marketing team, or you don't need to spend that much time answering questions, because an LLM is doing it for you, and it's tracking what's been asked of it, turning the questions into product research. Either way, the gains come from being able to run leaner, and therefore minimizing organizational misalignment.

q8zd3 - 4 hours ago

Honest question: Does anyone know about any quantitative study or analysis on productivity gains using code assistants? Asking for numbers comparing between the "pre AI era" and now.

Also, I have the impression that LLMs bring some gains or benefits for individuals but not relevant enough at the organization level.

bicepjai - 6 hours ago

A recent NYT podcast showcased how China and the US are putting time, effort, and money into using AI. I have to say I liked China's approach of percolating AI into the economy more than the US approach of walled gardens in the cloud.

https://podcasts.apple.com/us/podcast/the-daily/id1200361736...

dsr_ - 4 hours ago

Insofar as I have seen anyone get actual productivity boost from AI, the process went like this:

We have a person who wants, effectively, a formatted report generated on demand from four sources. The current interface is four different programs, all of which were written by different groups inside the corp, but they also all draw from the same or similar databases. There's a unified login, but each interface has its own permissions.

The company brings in an AI initiative and soon enough drops all security restrictions for the AI's access to the databases. The new formatted report gets generated through the use of a few tens of thousands of tokens each time, and about 5% of the time synthesizes non-existent data.

A competent DBA and application programmer could have spent a week doing the same thing, producing a program which would do the job faster, cheaper (at run-time), secure and in a way which could be extended and debugged.

But DBA and application programmer time is expensive up-front and the execs are gung-ho about the stock-price now that they are hip and trendy.

slashdave - 5 hours ago

> Every software developer knows that you can’t make projects go faster just by typing faster.

You know, typing fast and accurately is kind of important.

The new speed skill that developers now need is speed reading. LLMs produce copious amounts of output (tests, documentation, diagnostics). They also produce code so quickly that the skill of focusing on weak points becomes essential.

king_geedorah - 8 hours ago

> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.

This is how I felt when I first started seeing people discuss things like AGENTS.md etc.

shay_ker - 6 hours ago

> "requirements were always the bottleneck"

> "faster typing won't make you faster".....

I understand a Deloitte consultant has specific incentives. But let's first try to answer a baseline question: why do some companies have thousands of software engineers? What do they all do?

And then, a follow-up: what is actually the bottleneck at most companies? What causes "requirements gathering" to take long?

r2ob - 8 hours ago

Large corporations with orthodox methodologies will take time to extract the best benefits from AI. Small teams, which still remember the original Agile Manifesto, will soar and overtake their competitors.

whatever1 - 5 hours ago

It definitely made the process of testing features with the users 10x faster. You can iterate, test and throw away bad ideas much much faster.

The proper implementation and design still take time, but still faster in systems with a lot of available resources online.

praneetbrar - 8 hours ago

If the underlying workflow is noisy, ambiguous, or overloaded with coordination overhead, faster generation just produces more low-context output to review and reconcile.

sajithdilshan - 7 hours ago

This is so true. Recently, I’ve been working on a project involving almost every department, including Product, Engineering, Compliance, Finance, etc. We kicked things off late last year with many meetings. Product was primarily coordinating between the teams, but engineers also met directly with non-engineering departments to explain technical details and accelerate the timeline.

However, while the engineering team successfully fast-tracked development, UAT, and production testing, largely thanks to AI, other departments only began digging deeper into the project toward the end of April. To be fair, they do use AI in their workflows to some extent, but they haven't adapted their processes to keep pace with engineering's increased productivity.

In my opinion, this lag is mostly because many employees in those departments are older and hesitant to change their routines. While I understand that resistance to change is a natural human trait, what comes to my mind is this beautiful German adage, "Wer nicht mit der Zeit geht, geht mit der Zeit" which loosely translates to, "Who doesn't change with time is left behind by time"

CharlieDigital - 8 hours ago

    > ...but that doesn’t mean it’s generating the correct code.

Something I'm observing is that a lot of the pressure now moves to the product team to actually figure out the correct thing to build. Some product teams are simply not used to this and are YOLO-ing prototypes, iterating, finding out they built and shipped the wrong thing, and then unwinding.

Before, when there was the notion that "building is expensive", product teams would think things through, do user interviews up-front, actually do discovery around the customer + business context + underlying human process being facilitated with software.

This has shortened the cycle to first working prototype, but I'd guess that over the longer term it extends the time to final product, because more time is wasted shifting the deliverable and experience on the user during this process of discovery, versus nailing most of the product experience in big, stable chunks through design.

At the end of the day, there is a hidden cost to fast iterative shifts in the fundamental design of software intended for humans to use and for which humans are responsible for operating. First is the cost on the end users, who have to stop, provide feedback, and then retrain on each cycle. Second, the compounding complexity in the underlying implementation, as product learns requirements and vibe-codes the solution, creates a system that becomes very challenging for humans to operationalize and maintain.

Ultimately, I think the bookends of the software development process are being neglected (as the author points out) to the detriment of both the end users and the teams that end up supporting the software. I do wonder if we're entering an "Ikea era" of software where we should just treat everything as disposable artifacts instead.

airstrike - 6 hours ago

LLMs are great at two things: search and speed of generating code.

I get the most value from them when I'm either asking them to fill in the blanks of something already half implemented, or when I need some feature in a given context/language that only exists in other languages.

pron - 7 hours ago

There is another problem. For developers, productivity means "functionality produced per hour of work", but that's not what productivity means for businesses. To them, productivity means "money produced per hour of work", and because AI costs money, it is this number that needs to go up (not quite, as it's more "value" than money, but until the economy adjusts they are similar). Even if we could considerably reduce the time between releases and/or do it with fewer people at scale across the industry, for it to pay off, we'll need to see a corresponding rise in demand for software and/or features.

Another option is that lower software costs would significantly reduce the cost of whatever non-software product the software supports (manufactured good, electricity, services, telecom etc.) but I don't know in which industry the cost of software is a large portion of the overall product cost.

And there's another thing. A company that makes tractors can't produce food without land. A company that makes metal machining equipment can't make cars without the raw materials. But a software company that makes software that automatically makes software could just produce the result software itself rather than sell the software-making software. If AI ever reaches the point it makes software at a marginal cost that's not much higher than the cost of the AI itself, what would be the incentive of selling that AI?

lakshjain1705 - 5 hours ago

Great explanation. It's true that AI doesn't generate the correct program every time, but sadly it has become common practice to involve AI in every aspect of software engineering. It's also true that it has turned software engineers into product managers: their work has become debugging and testing the entire codebase, which adds more frustration.

Everything is OK, but the size of the Gantt chart should be expanded.

spoaceman7777 - 4 hours ago

I'm sure this take will age well.

phyzome - 7 hours ago

Someone I know said "software is made of decisions". <https://siderea.dreamwidth.org/1219758.html> Seems very applicable here.

sunir - 8 hours ago

Our most popular methods of using AI in software development today are either waterfall or autocomplete. We aren't at a great pair-programming experience yet. I presume that would improve speed and accuracy, but it's still unclear.

regnull - 6 hours ago

You know, AI could help you to produce better-looking charts.

lysecret - 4 hours ago

AI, in my mind, is a new primitive of computing, like compute, a DB, or blob storage.

osigurdson - 4 hours ago

It goes faster, at least for a while, if you don't look at the code.

ChicagoDave - 5 hours ago

Another post that doesn’t understand effective use of GenAI in software engineering.

The assumption is that there’s no way to extract speed and accuracy matching business models.

This isn’t obviously false to the majority of devs/architects because most are vibe-coding, but it is extremely obvious to the minority that has focused on accuracy first, THEN speed.

https://devarch.ai/

reenorap - 5 hours ago

This blog post is nonsensical, and the arbitrary time boxes aren't realistic. Not all development cycles or features require legal input, and I would hazard most don't, even in Big Tech. Documentation takes seconds to generate. Same for tests.

Feature development could take minutes to hours depending on how you iterate. These days, we just think of a feature and add it within an hour using AI. We have a process, now a year old, that fixes bugs that would have taken us hours or days; it spits out a fix in about 10-15 minutes that is 95% accurate. 5% is garbage, but 24 months ago 95% of it was garbage, so the progress is staggering. The longest pole is code review, which is all human, but that will all be automated soon.

Not everything will be much faster, but most processes will be 1-3 orders of magnitude faster. To ignore this or find excuses why LLMs/AI won't speed things up or remove the need for large swathes of humans is delusional and cope-ism.

nijave - 6 hours ago

While I agree with the article, I think AI can speed up all steps in the Gantt chart. It's really good about aggregating and summarizing information.

>Process blocked on human inputs

Have AI check chat, email, and the issue tracker to see who it's blocked on and what the latest status is. It may not save a huge amount of time, but it can dig through the info pretty quickly.

>Exploration

Once again, have it scour the issue tracker, chat, customer suggestions, and product documentation, and summarize the history and current status. Much quicker than setting up new meetings to try to rediscover and organize existing info.

Another use case: have an agent build a prototype, hand it to people, then have AI summarize and integrate the feedback.

Claude or ChatGPT + Slack MCP + Jira MCP + Google Docs MCP + internal knowledgebase MCP + gh (GitHub) CLI + Datadog MCP--really 1 MCP per process in the Gantt chart--has been a huge boost at work just digging through context scattered all over the place and summarizing.

That said, it definitely still needs supervision and hand-holding along the way.

cmrdporcupine - 7 hours ago

So we have spent 40 years trying to get management and investors to understand that 9 people can't make a baby in one month.

There's no point in falling under the illusion that they'll finally get it now. This will all fall on deaf ears. They're convinced they're automating us out of existence when in fact they'll need the services of people who can surf complex systems more than ever.

We will be able to do more than ever and potentially faster. The issue remains that most of the things these people ask us to do and want us to do and pay us to do remains basically stupid and as TFA points out, the last mile of getting shit properly shipped isn't going to speed up. It's going to slow down.

If you want to see what happens when you put people in charge who sincerely believe in the "AI automates SWEs out of existence" mantra, take a look at the code quality of Claude Code and the recent "bun rewrite in Rust" fiasco.

jorisw - 4 hours ago

I could see two ways out:

- People need to be trained to use AI in ways that don't produce what we call slop, where half the output is made up by the LLM

- To that effect, LLMs should be trained to ask for more input before offering any kind of final output

sonnyproto - an hour ago

To be honest, I think my processes go 10x faster with AI. It's literally visible lmao.

himata4113 - 4 hours ago

This is wrong from the start, because it assumes AI is used only for development.

No. AI is used all the way from the very start to the very end, and after.

Havoc - 8 hours ago

It absolutely will make some things faster. Anyone who has ever churned out some boilerplate code with it knows that.

...but yeah, most organizational processes and people aren't set up to leverage it, and rollout will be slow (same with learning where it does and doesn't work).

justinhj - 3 hours ago

Whilst the conclusion of the article certainly seems plausible, it glosses over the cost calculations and oversimplifies them.

The cost of a subscription is somewhat offset by it being guaranteed income regardless of usage, following the financial model of gyms, whilst API costs reflect both the convenience of on-demand pricing and the ability to scale for applications with many users.

Further, API and subscription prices need to cover the operating costs of the business and the massive SOTA training costs, as well as the cost of inference.

The true cost of serving tokens is buried in all of that in these enormous, opaque companies.

fHr - 7 hours ago

Maybe not my existing processes, but it can help you enormously. I literally found a problem by using AI to analyze packets in Wireshark; it hinted at and steered me in the direction of finding the error setting in the end. Could a senior network guy have found it? Yes, but probably not much faster. Could I, as an L2 SWE unfamiliar with much of networking and the company's stack (I'd been at the company about a month), have found it with no AI? Absolutely not.

stldev - 7 hours ago

The METR report continues to hold up. I would add "No Silver Bullet" to the reading list.

Careful who you share this information with; better to roll with the kool-aid drinkers while they're holding the cards.

outside1234 - 6 hours ago

Research tells us that only 15% of software engineering is the “writing code” part. It looks like we are rediscovering that.

echelon - 8 hours ago

It makes small teams without organizational overhead go lightning fast.

It might be the ultimate tool of disruption.

johnwheeler - 5 hours ago

What a naïve article. People don’t write software this way anymore. A Gantt chart? We don’t use those anymore.

People have to stop promoting this narrative that AI doesn’t make you move faster; it’s not helping anybody.

I get it. We all worked hard for our skills, and it’s really difficult watching them get automated away, but it’s been this way since the printing press, assembly lines, and the Industrial Revolution itself. Things change, and you have to adapt to them and stop thinking about it from a self-centered point of view. The narrative people should be pushing is that you can build great things with AI.

Of course you might not have a job for a while, and yes, that’s a big deal, but it doesn’t mean that AI is wrong or stupid. It means you have to adapt.

titaniumrain - 8 hours ago

cars are not faster than horses

simonreiff - 6 hours ago

The bottom line is that AI is genuinely useful for prototyping new features, acting as a sounding board, and generating quick initial drafts, even if the quality isn't uniformly excellent. It seems plausible to conclude that it will take only a little additional effort to refine that initial draft into truly high-quality, production-grade code. In reality, whole processes for building properly with AI-generated outputs, ones that thoroughly mitigate the fundamental limitations and constraints of AI agents (many of which are not well understood even by daily users), still need to be invented and implemented.

I think many things that were true prior to AI are still true or more so today, but new workflows and processes altogether are needed. I suspect that comprehensive, detailed planning and specification documentation must be assembled in advance of beginning code (akin to waterfall) when working with AI agents. Furthermore, I still believe customers and other key stakeholders need to be involved early and often so that the product can iterate towards a better ultimate end state (i.e., agile). Unlike prior to AI, it's completely plausible to implement both types of approaches, and they aren't mutually exclusive. We can do comprehensive, exhaustive, thorough planning and specification documentation prior to handing off to dedicated engineering and products teams, AND we can work quickly and iteratively via sprints that aim for frequent meetings and updates with the stakeholders that matter.

I also think the same validation gates that mattered before -- linting, SAST, and most importantly comprehensive automated testing that runs locally and in CI/CD and is regularly expanded to cover all expectations about the behavior and structure of newly implemented functionality -- continue to matter now, more than ever.

New tools and processes must also be built to make human review, the single biggest bottleneck in software development today, simpler, more streamlined, and less taxing. Tools like CodeRabbit and Qodo can help automate and expedite the code-review and approval processes, but they would work even better off more surgical, tiny edits. Bloated, verbose AI-generated code edits are the core problem here. Process-management techniques to mitigate AI code overload can prohibit the submission of purely AI-generated PRs, require senior-engineer approval of any PR before merging, or cap the number of lines or files changed. More sophisticated approaches like Graphite's stacking of PRs genuinely help break massive PRs into smaller chunks.
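A cap on change size like the one described can be enforced with very little code. Below is a hypothetical pre-merge check (the 400-line threshold and all function names are illustrative assumptions, not any tool's real API) that counts added plus removed lines in a unified diff:

```python
def diff_churn(unified_diff: str) -> int:
    """Count added plus removed lines in a unified diff,
    ignoring the +++/--- file headers."""
    churn = 0
    for line in unified_diff.splitlines():
        if line.startswith(("+++", "---")):
            continue  # file headers, not actual changes
        if line.startswith(("+", "-")):
            churn += 1
    return churn


def pr_size_ok(unified_diff: str, max_churn: int = 400) -> bool:
    """Gate: reject PRs whose total churn exceeds the cap."""
    return diff_churn(unified_diff) <= max_churn


small_diff = """\
--- a/app.py
+++ b/app.py
@@ -1,2 +1,2 @@
-old_line()
+new_line()
"""
print(diff_churn(small_diff))  # → 2
```

A real gate would run in CI against `git diff --merge-base main` output, but the decision logic stays this small; the hard part is the organizational agreement on the threshold.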

Finally, precision-editing tools for AI coding assistants like HIC Mouse (full disclosure: my project) go beyond the options currently available to agents, whole-file replacement or exact string replacement, and let agents perform surgical, tiny edits at the editing-tool layer that don't touch any unrelated content. By giving agents specialized visibility, recovery, and next-step guidance mechanisms that safeguard AI workflows, such tools can materially reduce AI code slop by alleviating burdens upstream of code reviewers, both automated and human.

The bottom line: shipping secure, production-grade code was never easy and always took a long time. It's not necessarily easier now just because certain aspects of the overall process can be generated much more rapidly. Arguably, the hardest parts, like human review and approval, are now harder, not easier. Solutions will take hard work and must be tested in the crucible of real-world enterprise usage. I'm guessing that companies that deploy successful processes will be wildly profitable; those that don't, including well-established incumbents, will fail. I do think AI absolutely can give organizations a game-changing boost in development velocity of genuinely high-quality code that might even be better than anything ever created previously. I also fully agree with the author that for many organizations, AI will not make their processes go faster and may even slow things down.

Lapsa - 6 hours ago

"In 1975, Dr. Joseph Sharp proved that correct modulation of microwave energy can result in wireless and receiverless transmission of audible speech."

dakolli - 5 hours ago

I just spent a few days cleaning up someone's web app created with Claude Code. There were more than 30k lines of DEAD code, and I was able to cut the code that was actually being used by ~30-40%. If I had just written this app myself, it would have taken a day or two.
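Dead code on this scale can at least be surfaced mechanically before a human wades in. Here's a toy illustration (a crude heuristic, not a substitute for real tools like `vulture` or coverage-based analysis) that flags top-level functions defined but never referenced elsewhere in a module:

```python
import ast


def unused_functions(source: str) -> set[str]:
    """Return names of top-level functions never referenced in the module.

    A crude heuristic: it misses dynamic calls, re-exports, and methods,
    so treat results as review candidates, not certain dead code."""
    tree = ast.parse(source)
    defined = {n.name for n in tree.body if isinstance(n, ast.FunctionDef)}
    used = {
        node.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)
    }
    return defined - used


sample = """
def helper():
    return 1

def orphan():
    return 2

print(helper())
"""
print(unused_functions(sample))  # → {'orphan'}
```

Even a rough pass like this would have flagged much of those 30k lines as candidates automatically; the judgment call of whether they're truly dead still falls to the reviewer.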

LLMs are not helpful; they make everything worse. They make you worse, or reduce you to average at best. I really just don't see what y'all are seeing. I have access to every model with no limits. It's not an issue of "holding it correctly," I can assure you; I've tried.

Yes, it can create very small programs with low complexity, but anything of any size ends up as a literal Eldritch horror, or with so many subtle bugs that it makes life miserable. I actually hate all of you who are pushing it onto people; it's such a lie.