I believe there are entire companies right now under AI psychosis

twitter.com

693 points by reasonableklout 4 hours ago


https://xcancel.com/mitchellh/status/2055380239711457578

https://hachyderm.io/@mitchellh/116580433508108130

zmmmmm - 3 hours ago

I think AI rescue consulting is going to become a significant mode of high-value consulting, similar to specialists who come in to deal with a security breach or perform data recovery.

Purely AI-written systems will scale to a point of complexity that no human can ever understand; the defect close rate will taper off, the token burn per defect will climb, and eventually AI changes will cause more defects on average than they close, leaving the whole system unstable. It will take a special kind of process to clean-room such a mess and rebuild it fresh (probably still with AI) after distilling out core design principles to avoid catastrophic breakdown.

Somewhere in the future, the new software engineering will be primarily about principles to avoid this in the first place, but it will take us 20 years to learn them, just as the original software engineering took far longer than expected to reach a stable set of design principles (and people still argue about them!).

impulser_ - 3 hours ago

I'm pretty sure he's talking about companies and people outsourcing their decision making and thinking to AI and not really about using AI itself.

I don't think using AI to write code is AI psychosis or bad at all, but if you just prompt the AI and believe whatever it tells you, then you have AI psychosis. You see this a lot with finance people and VCs on Twitter. They literally post screenshots of ChatGPT as their thinking and reasoning about a topic instead of doing a little bit of thinking themselves.

These things are dog shit when it comes to ideas, thinking, or providing advice, because they are pattern matchers: they are just going to give you the pattern they see. Most people notice this if they just try to talk to one about an idea. It often spits out the most generic dog shit.

They are, however, pretty useful for certain tasks where pattern matching is actually beneficial, like writing code, but again, you just can't let them do the thinking and decision making.

taffydavid - 3 hours ago

This post calls out how you can't argue with these people, because they say "it's fine to ship bugs because the agents will fix them so quickly and at a scale humans can't!"

The top reply is from someone doing exactly that, arguing "but the agents are so fast!"

foxfired - an hour ago

Maybe this is what will turn software engineering into an Engineering field.

Right now, prompters are setting up whole company infrastructures. I personally know one. He migrated the company's database to a newer Postgres version. He was successful in the end, but I was gritting my teeth as he described every step of the process.

It sounded like "And then, I poured gasoline on the servers while smoking a cigarette. But don't worry, I found a fire extinguisher in the basement. The gauge says it's empty, but I can still hear some liquid when I shake it..."

If he leaves the company, they will need an even more confident prompter to maintain their DB infrastructure.

miek - 3 hours ago

My very large employer has always been glacially slow on modernization and tech adoption. It may now, oddly enough, become a competitive advantage.

dtnewman - 3 hours ago

If you feel this way, you might like my new CLI tool, Burn, Baby, Burn (those tokens) (https://github.com/dtnewman/burn-baby-burn/tree/main).

Show HN here: https://news.ycombinator.com/item?id=48151287

jimbokun - an hour ago

This reminds me of Rich Hickey’s “Simple Made Easy” and his approach in making Clojure.

Even before LLMs generating entire programs, complex frameworks allowed developers to write the initial versions of programs very quickly, but at the cost of being hard to understand and thus hard to debug or modify.

Some of us are betting that the AIs will always be smart enough to debug, maintain and modify the programs written by AI, no matter how convoluted or complex. I’m not so sure.

gopalv - 3 hours ago

The AI psychosis is not the anti-opinion to the use of AI.

I use AI coding tools every day, but AI tools have no concept of the future.

We've built stable systems by relying on the selfish thinking an engineer has: "if this breaks in prod, I won't be able to fix it, and they'll page me at 3AM."

The same goes for the general laziness of looking for a perfect library on CPAN so that I don't have to do the work myself (often taking longer to not find a library than it would to write it by hand).

I've written thousands of lines of code with AI tools that ended up in prod, and mostly it feels natural, because since 2017 I've been telling people to write code instead of typing it all on my own, and setting up pitfalls to catch bad code in testing.

But one thing it doesn't do is "write less code"[1].

[1] - https://xcancel.com/t3rmin4t0r/status/2019277780517781522/

Groxx - 3 hours ago

Bug reports also go down when people lose faith that they will be fixed, because reporting them is often a substantial time commitment. You see it happen pretty regularly as trust in a group/company collapses.

flumpcakes - 2 hours ago

There's a lot of people writing bad code. With AI being forced top down (with the promise of turning people into 10x-ers), we're going to get a lot of people writing bad code 10x faster.

I really do worry, and I especially worry about security. You thought supply chain security management was an impossible task with NPM? Let me introduce you to AI: you can look forward to the days of AI poisoning, where AIs will infiltrate, exfiltrate, or just destroy, and there's no way of stopping it because you cannot examine the internals of the system.

AI has turbo charged people's lax attitude to security.

God help us.

charlotte-fyi - an hour ago

I feel in a really weird position where I both really dislike what AI is doing to the experience and practice of writing code, to the point where I want a job doing literally anything else besides using the computer, but also think that these tools are extremely powerful and only getting better.

I think Mitchell's point is well taken -- it's possible for these tools to introduce rotten foundations that will only be found out later, when the whole structure collapses. I don't want to be on the hook when that happens without the deep understanding of the code base that I used to have.

But humans have introduced subtle yet catastrophic bugs into code forever too... A lot of this feels like an open empirical question. Will we see many systems collapse in horrifying ways they didn't before? Maybe some, but won't we also learn that we need to shift more toward specification and validation? Idk, it just seems to me like this style of building systems is inevitable, even if there are some bumps along the way.

I feel like many in the anti camp have their own kind of reactionary psychosis. I want nothing to do with AI but I also can't deny my experience of using these tools. I wish there were more venues for this kind of realist but negative discussion of AI. Mitchell is a great dev for this reason.

vadepaysa - 3 hours ago

"Just use autoresearch and it will fix your app's memory leaks in an hour" is what I was nonchalantly told by someone who has never written a line of code ever.

I guess what I relate to the most is how dismissive people get about real software engineering work.

I may have skill issues, but I have yet to reach the level of autonomous engineering people tend to expect out of AI these days.

nialse - 4 hours ago

I'm starting to long for the age after AI. When the generative euphoria has settled and all outputs are formally verified based on exquisite architectures and standards.

apalmer - 2 hours ago

I don't think it's helpful to call this psychosis. Beyond that, I don't think it's even irrational.

It is definitely factual that there is a complete paradigm shift in the prioritization of quality in software. It's beyond just an AI side effect, and is now its own standalone thing.

There have always been many industries, companies, and products that sit low on the quality scale but are so cheap that it makes good business sense, both for the producer and the consumer.

Many companies are definitely choosing this business strategy explicitly. Many others don't realize they are doing it implicitly.

Whether the market will accept the new software quality paradigm remains an open question.

mattbrewsbytes - 2 hours ago

The race to invent variants of Gas Towns, Ralph loops, pump out videos, blogs, etc. showing off greenfield development with cleverly named agents running in parallel is another case of engineering people diving head first into Resume Driven Development.

Sure, there are industry-changing things going on. But what if you're working on an app that's a decade old and has had different teams of people, styles, and frameworks (thanks to the JS-framework-a-week Resume Driven Development)? Some markdown docs and a loop of agents aren't going to help when even humans have trouble understanding what the app does.

Glyptodon - an hour ago

It's always funny to me that people don't realize full test coverage just means every line is hit, not that everything is correct. (I don't view this as an argument against tests, but with AI it's especially important: if you aren't careful, it'll be very happy to generate coverage that is not quite right.)
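To make the coverage point concrete, here is a tiny hypothetical Python sketch (function and test names invented for illustration): the test executes every line of the function, so a line-coverage tool would report 100%, yet an off-by-one boundary bug survives because no assertion exercises the boundary value.

```python
def is_adult(age):
    # Intended rule: adults are 18 and over.
    # Hypothetical bug: `>` should be `>=`, so exactly 18 is misclassified.
    return age > 18

def test_is_adult():
    # These two asserts execute every line of is_adult, so a coverage
    # tool reports 100% line coverage for the function...
    assert is_adult(30) is True
    assert is_adult(5) is False
    # ...but age == 18 is never checked, so the boundary bug passes.

test_is_adult()
```

Branch coverage, mutation testing, or property-based testing can flag this kind of gap; line coverage alone says nothing about whether the assertions are meaningful.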

hooo - 3 hours ago

Why do you all still submit twitter.com links when that domain does not even work?

bsenftner - 3 hours ago

This is a critical communications issue that is becoming what I believe is the defining characteristic of "This Age": nobody knows how to discuss disagreement, and because it cannot even be discussed, communication ends, followed by blind obedience, forced bullying, retreat, and abandonment. This is going to be a hell of a ride, because nobody can really discuss the situation in a rational tone.

nwah1 - 16 minutes ago

Is he talking about github?

mmaunder - an hour ago

Amazing how the dev community is suffering from a similar inability to approach the subject of real world AI efficiencies and business benefits. I don’t think it’s helpful to accuse the other side of psychosis. It disqualifies any data or experience they bring to the conversation.

tacostakohashi - 4 hours ago

"no no, it has full test coverage"

at least at my BigCo, AI is being used for everything - writing slop, writing tests, code reviews, etc.

it would make sense to use AI for writing code, but human code review. or, human code, but AI test cases... or whatever combination of cross-checking, trust-but-verify, human in the loop, etc. people prefer.

i think once it gets used for everything, people have lost the plot, it's the inmates running the asylum.

mattgreenrocks - 3 hours ago

The only way many people learn that the stove is hot is by burning their hands on it.

Let them.

keepamovin - an hour ago

It seems the diagnosis of psychosis is too quick: it seeks to reestablish the frame of expert for the developer identity that is being replaced by it.

“It feels like entire companies are deluded into thinking they don’t need me, but they still need me. Help!”

The broad sentiment across statements of this "AI psychosis" type is clear, but I think the baseline reality is simpler. How can you be so certain it's psychosis if you don't know what will unfold? Might reaching for the premature certainty of making others wrong, satisfying as it might be to the ego, simply be a way to compensate for the challenges of a changing work environment, and a substitute for actually considering the practical ways you could adapt to it? Might it not be more helpful and profitable to ask "how can I build windmills, ride this wave, and adapt to the changing market under this revolution?" than to soothe myself with the delusion that all these companies think they don't need me now, but they'll be sorry?

The developer role is changing, but it doesn't have to be an existential crisis, even though it may feel that way. It will probably feel more that way the more you remain stuck in old patterns, and over-certainty about how things are doesn't help (though it may feel good). This is the time to be observant and curious and get ready to update your perspective.

You may hide from this broad take (that "AI psychosis" statements are cope) by retreating into specific nuance: "I didn't mean it that way; you're wrong. This is still valid." But the vocabulary betrays motive. Resorting to clinical, derogatory language like "AI psychosis" immediately invokes a "superior expert judgment" frame, and in the current zeitgeist that is a big tell. It signifies a need to be right, and a deeply defensive pose rather than a clear assay of what's real in a rapidly changing world. The anxiety driving the language speaks far louder than any technical pedantry used to justify it, and is the most important, and IMO most profitable, thing to address.

throwawaypath - 3 hours ago

Mitchellh is on to something. Some of the AI products I've seen seem like psychosis hallucinatory fever dreams, using terms and concepts that have no meaning. Funding? $50,000,000 pre-seed.

linkregister - 3 hours ago

I don't doubt there are companies totally misusing coding agents and LLMs in production. There are also real companies with real revenue and solid architecture using LLMs to deliver products. There are also companies with real revenue and rapidly accumulating tech debt.

Eventually the companies that can't cope with undisciplined engineering will succumb to unacceptable reliability and be outcompeted, just like in the "move fast and break things" era.

robotswantdata - 4 hours ago

Most labs are shilling “AI worker” dreams to these very companies

hoppp - an hour ago

Pointing out the obvious.

A lot of companies have been under AI psychosis for years and will be forever.

JeremyJaydan - an hour ago

If you don't use it you lose it, and a lot of people are losing it..

insane_dreamer - 38 minutes ago

Just talked to an exec yesterday about their multinational company, where the newly-installed CEO just came in with "everyone needs to be using AI" and "we should be doing everything with AI".

I cautioned them that this is a terrible idea -- you have business people who don't know what they're talking about, and all they know is "if we don't 'do AI' we'll be left behind, because our competitors are 'doing AI'" (whatever tf "doing AI" means).

Yes, LLMs are a great tool. But they're not a magic bullet you stick into everything. Use them where they make sense, and treat them like you would other tools.

Make "doing AI" a KPI in your org, and you're going to have people "doing AI" amazingly (LOC counts! tokens burned! tickets cleared!) while not actually being more productive, potentially building something that will come down on your head and leave the next team to "clean up the AI mess".

Ifkaluva - 2 hours ago

The Twitter post doesn’t even document some of the most psychotic things that are happening.

slopinthebag - 3 hours ago

I have a ton of respect for Mitchell - I didn't really know who he was until Ghostty but his writings and viewpoints on AI seem really grounded and make the most sense to me. Including this one.

Many people on this forum are suffering under this same psychosis.

ivanjermakov - 3 hours ago

Deprecating immature workflows (LLM agents in this case) is much simpler and faster than building them from scratch. Many companies get this risk assessment right: it's the case where being wrong is much more costly than being right.

topherPedersen - 2 hours ago

Hype & greed are a hell of a drug

spicyusername - 3 hours ago

We're definitely in the mess around phase of AI adoption.

I don't think it's super clear what we'll find out.

We've all built the moat of our careers out of our expertise.

It is also very possible that expertise will be rendered significantly less valuable as the models improve.

Nobody ever cared what the code looked like. They only ever cared if it solved their problem and it was bug free. Maybe everything falls apart, or maybe AI agents ship code that's good enough.

Given the state of the industry, we're clearly going to find out one way or the other, hah!

LunicLynx - 3 hours ago

Either this or we humans are out of the picture soon.

the13 - 2 hours ago

The entire problem is that vibe coding is only good for demos, prototyping, and finding signs of product-market fit without actually releasing a product into the market.

You should not release a product into the market unless you have a good enough product that can keep you and your client compliant, safe and secure - including not leaking their customer info all over the place.

Prompt injection risk, etc. are massive for agentic AI without deterministic guardrails that actually work in practice.

Stop testing in production if you're shipping in a regulated industry. Ridic!

If you're not technical, you can bring in someone who is after demos and signs of product-market fit, but BEFORE deployment. This is common sense and best practice, but startup bros dgaf because they're just good at sales and marketing, and short-term greedy.

Comical.

crnkofe - 3 hours ago

Sounds pretty accurate. A bunch of comments on this thread sound like AI is some kind of new doomsday cult. The thing I personally find most annoying is that all engineering principles are getting crushed by non-techies: management counting token usage, forcing agent use, reducing headcount in the name of productivity gains; devs building bridges when nobody knows what the bridge is, what standards it was built to, how it works, or how to maintain it; VCs counting extra money and claiming chasing the holy profit is the future. The abundance of engineering apathy is disturbing.

madrox - an hour ago

I saw this first hand at a company, and I think this is what happens when you combine FOMO with an utter lack of industry best practices. No one knows where they are going, but everyone is convinced they are not getting there fast enough.

What's more, the only people they talk to about it are others at the same company. There is no external touchstone. There are power dynamics from hierarchy. No new ideas other than what is generated within the company. In other circumstances, this is a textbook environment for radicalization.

I would encourage all leadership to take a deep breath. You have time to think slow.

matt3210 - 2 hours ago

Fewer users can be the cause of fewer bug reports.

awesomeusername - an hour ago

If you know these things you can take them into account while driving the AI.

Sorry, I don't buy your argument

CodingJeebus - 4 hours ago

Anyone who's taken VC funding has no choice. More money has been spent on AI commercialization than on the atomic bomb, the US interstate build-out, the ISS, and the Apollo program combined. Failure would be catastrophic, and therefore anyone tied to this ship cannot accept a world in which it fails.

nunez - 3 hours ago

Welcome to the club, Mitchell! Pizza's to the right.

In all seriousness...well, yeah. AI is a monkey's paw, and that's how monkey paws work. So many movies and books warned us!

dudul - an hour ago

Totally unrelated pet peeve of mine, I hate when people write this: "MTBF vs MTTR (mean-time-between-failure vs. mean-time-to-recovery)".

You first use the full words and then introduce the acronym that you're going to use in the rest of the text: "Mean Time Between Failures (MTBF) vs. Mean Time to Recovery (MTTR)".

With the latter, readers understand the term immediately, even if they don’t know the acronym. And they don't have to read these weird letters before getting the explanation.

singpolyma3 - 3 hours ago

This is... not what psychosis means? Being wrong is not psychosis.

leeoniya - 3 hours ago

> "no no, it has full test coverage"

i don't have enough fingers (and toes) to count how many times i've demonstrated that "100% coverage" is almost universally bullshit.

LogicFailsMe - 3 hours ago

I shut down AI Agent fanatics on the regular. But chop one head off there and two take its place. And I say that as someone working with Claude and Codex daily. While they are both incredibly good at clearly described and defined atomic tasks, application scope makes them lose their minds and the slop ensues.

tamimio - 2 hours ago

The hype or psychosis comes mainly from the mediocre/non-expert/middle-manager/you-name-it crowd, especially when a person who has never written a single line of code suddenly produces a wall of text and it actually works!? Oh my!!

But in reality, anyone who knows their field and goes after a specific issue will soon find that AI is nothing but an assistant. Sure, it can help and automate some stuff, but that's it; you need to keep it leashed and laser-focused on that specific issue. I've personally tried all the high-end ones and found a common theme: they are designed to find a solution or an answer no matter what, even if that solution is a workaround built on top of workarounds. It's like welding all sorts of connections between A and B, resulting in a fractal structure rather than a straight path. If you let it keep going and flowing on its own, the results are convoluted and way overcomplicated, and not the good kind of complexity, the bad kind.

mrwaffle - 2 hours ago

Saying the _quiet_ part out loud.

bolangi - 3 hours ago

When war psychosis is not enough....

daneel_w - 2 hours ago

I work for a small telecom services provider whose current VP set an AI course immediately upon stepping on board 6 months ago. Involving AI in everything and every task is now our first priority, across all employee segments, not just us system developers, and leadership is embarking on a program to measure employees' AI usage levels as a means to gauge everyone's individual efficiency. It's like the era of the evangelical crypto bros all over again.

LAC-Tech - 2 hours ago

I am really looking for more reasoned approaches to AI.

I am very close to using it only as a pair programmer, with me actually doing the coding. I am just so tired of fixing its mistakes.

perching_aix - 3 hours ago

I'm going through a mixed experience regarding this, personally.

Management is really pushing AI. It's obnoxious, and their idea of how it fits into my team's job specifically is completely, hilariously detached from reality. On the off chance someone says something reasonable, unless it fits the mold, it's immediately discarded, the mold being "spec-driven development". We're not even a product team, for crying out loud. I straight up started skipping these meetings for the sake of my sanity. It's mindwash, and it's genuinely dizzying. The other reason I stopped attending is that it ironically makes me more disinterested in AI, which I consider to be against my own long-term interests.

On the flipside, I love using Claude (in moderation). It keeps pulling off several very nice things, some of which Mitchell touched on in this post (the last one):

- I write scripts and automation from time to time; Claude fleshes them out way better with way more safety features, feature flags, and logging than I'd otherwise have capacity to spend time on

- Claude catches missed refactors and preexisting defects, and does a generally solid pass checking for defects as a whole

- Claude routinely helps with doing things I'd basically never be able to justify spending time on. Yesterday, I one-shotted an entire utility application with a GUI to boot, and it worked first try; I was beyond impressed.

- Claude helped me and a colleague do some cross-team investigation in secret. We're migrating <thing> and we were evaluating <differences>. There were a lot of them. Management was in limbo, unsure what to do, flip-flopping between bad options. In a desperate moment I figured, hey, we kinda have a thing now for investigating an inhuman amount of stuff in detail, so I put together a care package for my colleague with all our code, a bunch of context, a capture of all the input data for the past week, and all the logs generated. My colleague put his team's side of the story next to it, and with the help of Claude did some extremely nice cross-functional investigation. Over the course of a few weeks, he was able to confirm about a dozen showstopper bugs, many of which would have been absolutely fiendish if not impossible to fix (or even catch) if we had gone live without knowing about them. One even culminated in a whole-ass solution re-architecture. We essentially tore down a silo wall with Claude's help.

So ultimately it really is a mixed bag, with some really deep low points and some really nice highlights. I also just generally find it weird that a technical tool category is being pushed down people's throats with technical reasoning, but by management. One would think this goes bottom-up, or is at least a lot more exploratory. The frenzy is real.

Apocryphon - 2 hours ago

Make the most of it. Their delusion is your opportunity.

gverrilla - 2 hours ago

'AI psychosis' is a slop concept.

andreasgl - 4 hours ago

https://xcancel.com/mitchellh/status/2055380239711457578

senordevnyc - 4 hours ago

Assuming he's right, I don't see how that constitutes "psychosis", as opposed to being yet another of a billion examples of companies jumping on a bandwagon / cargo cult and then learning they took it too far.

And also, he might not be right. But the good news is, we’ll all get to find out together!

selectively - 3 hours ago

I do not believe 'AI psychosis' is an actual thing.

elevation - 3 hours ago

Mitchell aches because his career has been solving broadly scoped problems by building a collection of thoughtful primitives for others to extend. LLMs seem to do the opposite but at great speed, and it hurts to watch.

weinzierl - 4 hours ago

"its fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!"

Hmm, I agree with the point OP is making, but I'm not so sure this is the best supporting argument. The bottleneck is finding the bugs; if he'd criticized people saying AI will be the panacea for that, I'd be with him. But people saying agents are fast and good at fixing human-found bugs is nothing I'd object to.

Agents are already fixing bugs quickly, and at a scale humans can't match.

woeirua - 3 hours ago

This doesn’t constitute AI psychosis. His argument is that we need to retain understanding of the systems we use, but there’s no compelling argument as to why that is the case. (I get that people are going to be offended by that statement, but agents are already better than the average software engineer. I don’t see why we need to fight this, except for economic insecurity caused by mass layoffs.)

It all just feels like horse drawn carriage operators trying to convince automobile drivers to stop driving.