The Hater's Guide to the AI Bubble

wheresyoured.at

181 points by lukebennett 10 hours ago


wulfstan - 7 hours ago

In July 2023, I wrote this to a friend:

"...being entirely blunt, I am an AI skeptic. I think AI and LLM are somewhat interesting but a bit like self-driving cars 5 years ago - at the peak of a VC-driven hype cycle and heading for a spectacular deflation.

My main interest in technology is making innovation useful to people and as it stands I just can't conceive of a use of this which is beneficial beyond a marginal improvement in content consumption. What it does best is produce plausible content, but everything it produces needs careful checking for errors, mistakes and 'hallucinations' by someone with some level of expertise in a subject. If a factory produced widgets with the same defect rate as ChatGPT has when producing content, it would be closed down tomorrow. We already have a problem with large volumes of bad (and deceptive!) content on the internet, and something that automatically produces more of it sounds like a waking nightmare.

Add to that the (presumed, but reasonably certain) fact that common training datasets being used contain vast quantities of content lifted from original authors without permission, and we have systems producing well-crafted lies derived from the sweat of countless creators without recompense or attribution. Yuck!"

I'll be interested to see how long it takes for this "spectacular deflation" to come to pass, but having lived through 3 or so major technology bubbles in my working life, my antennae tell me that it's not far off now...

K0balt - 5 hours ago

I too am deeply skeptical of the current economic allocation, but it’s typical of frontier expansions in general.

Somehow, in AI, people lost sight of the fact that transformer architecture AI is a fundamentally extractive process for identifying and mining the semantic relationships in large data sets.

Because human cultural data contains a huge amount of inferred information that isn't overtly apparent in the data set, many smart people mistook the results for the output of a generative rather than an extractive mechanism.

…to the point that the entire field is known as “generative” AI, when fundamentally it is not in any way generative. It merely extracts often unseen or uncharacterized semantics and uses them to extrapolate from a seed.

There are, however, many uses for such a mechanism. There are many, many examples of labor where there is no need to generate any new meaning or “story”.

All of this labor can be automated by applying existing semantic patterns to the data being presented, and to do so we suddenly no longer need to fully characterize or spell out the algorithm required to achieve that goal.

We have a universal algorithm, a sonic screwdriver if you will, with which we can solve any fully solved problem set by merely presenting the problems and enough known solutions so that the hidden algorithms can be teased out into the model parameters.

But it only works on the class of fully solved problems. Insofar as unsolved problems can be characterized as a solved system of generating and testing hypotheses to solve the unsolved, we may potentially also assail unsolved problems with this tool.
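
To make the "fully solved problem set" idea concrete, here is a toy sketch of my own (the rule y = 3x + 2 and all numbers are invented for illustration, not taken from the thread): the rule is never written into the learner; we only present problem/solution pairs and let the hidden rule be teased out into fitted parameters.

```python
# Toy illustration: recover a hidden rule purely from example problem/solution pairs.
import numpy as np

rng = np.random.default_rng(0)

# The "fully solved problem set": inputs and their known solutions.
# The hidden rule (y = 3x + 2) never appears in the fitting code below.
x = rng.uniform(-10, 10, size=200)
y = 3 * x + 2

# Fit parameters from the examples alone (ordinary least squares).
design = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(design, y, rcond=None)

print(f"recovered rule: y = {slope:.2f} * x + {intercept:.2f}")  # ~ y = 3.00 * x + 2.00
```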

frithsun - 7 hours ago

I believe this is a "good" bubble in the sense that the 19th century railroad bubble and original dot com bubble both ended up invested in infrastructure that created immense value.

That said, all of these LLMs are interchangeable, there are no moats, and the profit will almost entirely be in the "last mile," in local subject matter experts applying this technology to their bespoke business processes.

jakobnissen - 8 hours ago

I think the author's take is overly bleak. Yes, he supports his claim that AI businesses are currently money pits and unsustainable. But I don't think it's reasonable to claim that AI can't be profitable. This whole thing is moving extremely fast. Models are getting better by the month. Costs are rapidly coming down. We broadly speaking still don't know how to apply AI. I think it's hubris to claim that, in the wake of this whole bubble, no one will figure out how to use AI to provide value and no one will be profitable.

usrnm - 8 hours ago

Are we in a bubble that's going to pop and take a large part of the economy with it? Almost certainly. Does that mean AI is a scam? Not really. After all, the Internet did not disappear after the dot-com bust, and, actually, almost everything we were promised by the dotcoms became reality at some point.

tim333 - 9 minutes ago

>I have written hundreds of thousands of words with hundreds of citations, and still, to this day, there are people who claim I am somehow flawed in my analysis...

Says the PR guy who discovered AI a couple of years ago and now knows it all and that all the AI experts are wrong.

hotpotat - 7 hours ago

Lots of in-depth analysis, but I think the author is very clearly emotionally invested, to the point that they are only drawing conclusions that justify and support their emotions. I agree that we’re in a bubble in the sense that a lot of these companies will go bankrupt, but it won’t be Google or Anthropic (unless Google makes a model that’s an order of magnitude better or an order of magnitude cheaper with capability parity). Claude is simply too good at coding in well-represented languages like Python and TypeScript not to pay hundreds of dollars a month for (if not thousands, subsidized by employers). These companies are racing to have the most effective agents and models right now. Once the bottleneck is clearly humans’ ability to specify the requirements and context, reducing the cost of the models will be the main competitive edge, and we’re not there yet (although even now, the better you are at providing requirements and context, the more effective you are with the models). I think that once cost reduction is the target, Google will win, because they have the hardware capabilities to do so.

thoroughburro - 8 hours ago

The bubble will pop, just like the web bubble popped; and that’s going to suck. AI technologies will remain and be genuinely transformative, just like the web remained and was transformative (for good and ill).

jsnell - 34 minutes ago

The analysis is just bogus. He is basically comparing two years of inflated AI capex estimates to a low-ball estimate of one year of trailing revenue.

Let's unpack that a bit.

Capex is spending on capital goods, with the spending being depreciated over the expected lifetime of the good. You can't compare a year of capex to a year of revenue: a truck doesn't need to pay for itself in year 1, it needs to pay for itself over 10 or 20 years. The projected lifetime of datacenter hardware bought today is probably something like 5-7 years (changes to the depreciation schedule are often flagged in earnings releases, so that's a good source for hard data). The projected lifetime of a new datacenter building is substantially longer than that.

Somehow Zitron manages to make a comparison that's even more invalid than comparing one year of capex to one year of revenue: he basically ends up comparing a year of revenue to two years of capex. So now the truck needs to pay for itself in six months.

The way you'd need to think about this is, for example, to consider what return the capital goods bought in 2024 produced in 2025. But that's not what's happening here. Instead the article is basically expecting a GPU that's to be paid for and installed in late 2025 to produce revenue in early 2025. That's not going to happen. In a steady state this would not matter so much. But this is not a steady state: both capex and revenue are growing rapidly, and revenue will lag behind.
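
To make the depreciation point concrete, here is a minimal sketch with made-up numbers (the capex, revenue, and lifetime figures below are purely illustrative assumptions, not the article's or real figures): it contrasts the "two years of capex vs. one year of revenue" comparison with comparing the annualized, straight-line depreciated cost of the hardware actually in service against that same year's revenue.

```python
# Illustrative only: all figures are hypothetical, not the article's numbers.

capex_2024 = 200e9           # hypothetical AI capex in 2024 (USD)
capex_2025 = 300e9           # hypothetical AI capex in 2025 (USD)
ai_revenue_2025 = 60e9       # hypothetical AI revenue in 2025 (USD)
hardware_lifetime_years = 6  # ~5-7 year depreciation schedule, per the comment

# The comparison criticized above: one year of revenue vs. two years of capex.
naive_ratio = ai_revenue_2025 / (capex_2024 + capex_2025)

# A more meaningful comparison: annualized (straight-line depreciated) cost of the
# hardware actually in service during 2025 vs. that year's revenue. Late-2025
# purchases contribute little revenue in 2025, so only the 2024 vintage is counted.
annualized_cost_2025 = capex_2024 / hardware_lifetime_years
depreciated_ratio = ai_revenue_2025 / annualized_cost_2025

print(f"revenue / two years of capex:    {naive_ratio:.2f}")        # ~0.12
print(f"revenue / annualized 2024 capex: {depreciated_ratio:.2f}")  # ~1.80
```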

What about the capex being inflated and the revenue being low-balled?

None of us really know for sure how much of the capex spending is on things one might call AI. But the pre-AI capex baseline of these companies was tens of billions each. Probably some non-AI projects no longer happen so that the companies can plow more money into AI capex, but it absolutely won't be all of it like the article assumes. As another example, why in the world is Tesla being included in the capex numbers? It's just blatant and desperate padding of the numbers.

As for the revenue, this is mostly analyst estimates rather than hard data (with the exception of Microsoft, though Zitron is misrepresenting the meaning of run rate). But more importantly, they are analyst estimates of a tiny subset of the revenue that GPUs/TPUs would produce. What happens when Amazon buys a GPU? Some of those GPUs will be used internally. Some of them will be used to provide genai API services. Some might be used to provide end-user AI products. And some of them will be rented out as GPUs. Only the middle two would be considered AI revenue.
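
A tiny sketch of that accounting point, with invented bucket sizes (every number here is a hypothetical assumption): only some of the value a purchased GPU generates would ever show up in "AI revenue" estimates.

```python
# Hypothetical split of the value generated by a cloud provider's GPU purchases.
# Only some buckets would show up in analyst "AI revenue" estimates.
value_by_use = {
    "internal workloads":   30e9,  # not counted as AI revenue
    "genai API services":   25e9,  # counted as AI revenue
    "end-user AI products": 20e9,  # counted as AI revenue
    "raw GPU rental":       25e9,  # typically counted as plain cloud revenue
}

counted = value_by_use["genai API services"] + value_by_use["end-user AI products"]
total = sum(value_by_use.values())

print(f"counted as AI revenue: ${counted/1e9:.0f}B of ${total/1e9:.0f}B generated")  # 45B of 100B
```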

I don't know what the fair and comparable numbers would be, am not aware of a trustworthy public source, and won't even try to guess at them. But when we don't know what the real numbers are, the one thing we should not do is use obviously invalid ones and present them as facts.

> I am only writing with this aggressive tone because, for the best part of two years,

Zitron's entire griftluencer schtick has always been writing aggressive and often obscenity-laden diatribes. Anyway, please don't forget to subscribe for just $7/month, and remember that he just loves to write and has no motive for clickbait or stirring up some outrage.

jjjggggggg - 8 hours ago

Keep up the good work, but this could be said with more strength and in far fewer words by removing the indulgent rambling.

bibelo - 8 hours ago

The irony is that I asked ChatGPT to make a summary in French. However, I'm tired of the AI bubble and of seeing half of my Twitter feed filled with AI announcements and threads.

CharlesXY - 8 hours ago

This is quite refreshing to read. While I would classify myself more in the group of “optimists”, I do believe there is a severe lack of skepticism, and those who share negative or more conservative views are indeed held to a different standard than those who paint themselves as "optimists". Unlike with other trends before, the wave of grifters in the AI space is astounding: anything can be “AI-powered” as long as it's a wrapper/chatbot.

bgwalter - 8 hours ago

SoftBank is also more cautious and the "$500 billion" Stargate project that was hyped in the White House will just build a single data center by the end of 2025:

https://www.wsj.com/tech/ai/softbank-openai-a3dc57b4

elktown - 7 hours ago

What's clear is that the hype has reached such critical mass that people are comfortable enough to publicly and shamelessly extrapolate extraordinary claims based purely on gut feeling. Both here on HN and by complete laymen elsewhere.

AI-optimist or not, that's just shocking to me.

tomjuggler - 5 hours ago

Best rant I have read in such a long time. Subscribed despite the fact that I am all-in on AI for coding (plus much more) and disagree completely with the author's point of view.

billy99k - 7 hours ago

With current LLMs, my productivity is increased by at least 50%. This will only get better over time as efficiency is gained and hardware gets cheaper.

camillomiller - 9 hours ago

Thanks, Ed Zitron. This article is to me like a glass of ice water to somebody in hell.

pestatije - 7 hours ago

damn he doesn't say when the shorts should start

xela79 - 7 hours ago

Make a technology very affordable and get people hooked. Then, when LLMs have basically destroyed the open web, charge more for accessing and searching that wealth of human-created knowledge. Profit $$$

An ethical approach? Hell no. But what do you expect from an unregulated capitalistic system?

frozenseven - 7 hours ago

Hey, it's the guy who has been predicting the imminent collapse of AI for three years now! As I understand, he's a former video game journalist and being anti-AI is now his full-time thing. Saying it's all useless, fake, evil, etc.

A poor man's Gary Marcus, basically.

andrewstuart - 8 hours ago

These sound very similar in tone to the criticisms of Web 1.0.

AI/LLMs are an infant technology; we're at the beginning.

It took many many years until people figured out how to use the internet for more than just copying corporate brochures into HTML.

I put it to you that the truly valuable applications of AI/LLMs are yet to be invented and will be truly surprising when they come (which they must be, of course; otherwise we'd have invented them already).

Amara's law says we tend to overestimate the value of a new technology in the short term and underestimate it in the long term. We're in the overestimate phase right now.

So I’d say ignore the noise about AI/LLMs now - the deep innovations are coming.

louwrentius - 8 hours ago

AI is a temporary buoy for FAANG and the tech industry to keep the financial markets happy while they switch to their next source of growth:

Military contracts.

I hope people understand the irony, but to spell it out: they need to live on government money to sustain growth.

Corporate welfare, while 60% of the US population doesn't have the money to cover a $1,000 emergency.