Are we repeating the telecoms crash with AI datacenters?

martinalderson.com

113 points by davedx 9 hours ago


nuc1e0n - 3 hours ago

The article claims that AI services are currently over-utilised. Well, isn't that because customers are being undercharged for services? A car in neutral will rev up easily if the accelerator pedal is pressed even slightly, because there's no load on the engine. But in gear, the same engine will rev up much less with the same pedal input. Would the same over-utilisation occur if users had to financially support the infrastructure, either through subscriptions or intrusive advertising?

I doubt it.

And what if the technology to run these systems locally, without reliance on the cloud, becomes commonplace, as it now is with open source models? The expensive part is training these models, more than the inference.

iambateman - 2 hours ago

The thing that makes AI investment hard to reason about for individuals is that our expectations are mostly driven by a single person’s usage, just like many of the numbers reported in the article.

But the AI providers are betting, correctly in my opinion, that many companies will find uses for LLMs which are in the trillions of tokens per day.

Think less of “a bunch of people want to get recipe ideas.”

Think more of “a pharma lab wants to explore all possible interactions for a particular drug” or “an airline wants its front-line customer service fully managed by LLM.”

It’s unusual that individuals and industry get access to basically similar tools at the same time, but we should think of tools like ChatGPT and similar as “foot in the door” products which create appetite and room to explore exponentially larger token use in industry.

dust42 - an hour ago

OpenAI has 800,000,000 weekly users, but only 20,000,000 are paying while 780,000,000 are free riding. Should they accidentally under-provision, they could simply remove the free tier and raise prices for the paying clients. But that is not what they want.
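The ratio implied by those figures is easy to check, using only the numbers quoted above:

```python
# User figures as quoted in the comment above
weekly_users = 800_000_000
paying_users = 20_000_000

free_riders = weekly_users - paying_users
conversion = paying_users / weekly_users

print(f"{free_riders:,} free users")          # 780,000,000 free users
print(f"{conversion:.1%} paid conversion")    # 2.5% paid conversion
```

A 2.5% paid-conversion rate is the lever the comment is pointing at: even small changes to the free tier move enormous numbers of users.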

IMHO the investors are betting on a winner-takes-all market and that some magic AGI will come out of OpenAI or Anthropic.

The questions are:

How much money can they make by integrating advertising and/or selling user profiles?

What is the model competition going to be?

What is the future AI hardware going to be - TPUs, ASICs?

Will more people have powerful laptops/desktops to run mid-sized models locally and be happy with it?

The internet didn't stop after the dotcom crash, and AI won't stop either should there be a market correction.

mwkaufma - 16 minutes ago

Simultaneous claims that 'agentic' models are dramatically less efficient, but also forecasts of efficiency improvements? We're in full-on tea-leaves-reading mode.

paulorlando - an hour ago

The 2001 telecoms crash drove benefits for companies that came later, in the form of inexpensive dark fiber available after the bubble popped. WorldCom, ICG, and Williams sold off assets to Verizon, Level 3, Teleglobe, and others. That in turn helped future Internet companies gain access to plentiful and inexpensive bandwidth. Cable telephony companies such as Cablevision Systems, Comcast, Cox Communications, and Time Warner used the existing coaxial connections into the home to launch voice services.

gmm1990 - 3 hours ago

Some of the utilization comparisons are interesting, but the article's claim that $2 trillion was spent laying fiber seems suspicious.

recursive4 - 3 hours ago

Stylistically, this smells like it was copied and pasted straight out of Deep Research. Substantively, I could use additional emphasis on the mismatch between expectations and reality with regard to the telco debt-repayment schedule.

asplake - 3 hours ago

Yes-or-no conclusions aside (and despite its title, the article deserves better than that), the key point, I think, is this one: “But unlike telecoms, that overcapacity would likely get absorbed.”

Havoc - 3 hours ago

I don't think looking at the power consumption of B200s is a good measure of anything. It could well be an indication of higher density rather than of hitting limits and cranking voltage to compensate.

kqr - 3 hours ago

Is there a way in which this is good for a segment of consumers? When the current generation of GPUs is too old, will the market be flooded with cheap GPUs that benefit researchers and hobbyists who otherwise could not afford them?

turtlesdown11 - 25 minutes ago

Amazing article, I found it fascinating.

> You can already use Claude Code for non engineering tasks in professional services and get very impressive results without any industry specific modifications

After clicking on the link, and finding that Claude Code failed to accurately answer the single example tax question given, very impressive results! After all, why pay a professional to get something right when you can use Claude Code to get it wrong?

venturecruelty - 20 minutes ago

No, because at least dark fiber is useful. AI GPUs will be shipped off to developing nations to be dissolved for rare earth metals once the third act of this clown show is over.

avazhi - 44 minutes ago

No, because the datacenters will get used. The demand side exists, whether it’s LLM AIs or something completely different that isn’t AI related. That’s very different from a crash where there is absolutely nothing valuable/useable/demanded underneath the bubble.

gizajob - 4 hours ago

Yes

mNovak - 2 hours ago

Nice article; far from bullet-proof, but it brings up some interesting points. HN comments are vicious on the topic of AI non-bubbles.

kragen - 4 hours ago

This seems to be either LLM AI slop or a person working very hard to imitate LLM writing style:

The key dynamic: X were Y while A was merely B. While C needed to be built, there was enormous overbuilding that D ...

Why Forecasting Is Nearly Impossible

Here's where I think the comparison to telecoms becomes both interesting and concerning.

[lists exactly three difficulties with forecasting, the first two of which consist of exactly three bullet points]

...

What About a Short-Term Correction?

Could there still be a short-term crash? Absolutely.

Scenarios that could trigger a correction:

1. Agent adoption hits a wall ...

[continues to list exactly three "scenarios"]

The Key Difference From S:

Even if there's a correction, the underlying dynamics are different. E did F, then watched G. The result: H.

If we do I and only get J, that's not K - that's just L.

A correction might mean M, N, and O as P. But that's fundamentally different from Q while R. ...

The key insight people miss ...

If it's not AI slop, it's a human who doesn't know what they're talking about: "enormous strides were made on the optical transceivers, allowing the same fibre to carry 100,000x more traffic over the following decade. Just one example is WDM multiplexing..." when in fact wavelength division multiplexing multiplexing is the entirety of those enormous strides.

Although it constantly uses the "rule of three" and the "negative parallelisms" I've quoted above, it completely avoids most of the overused AI words (other than "key", which occurs six times in only 2257 words, all six times as adjectival puffery), and it substitutes single hyphens for em dashes even when em dashes were obviously meant (in 20 separate places—more often than even I use em dashes), so I think it's been run through a simple filter to conceal its origin.
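The checks described above are simple enough to reproduce mechanically. A minimal sketch (the sample string below is hypothetical, not a quote from the article): count occurrences of "key" and of single hyphens set off by spaces, which often stand in for em dashes:

```python
import re

def style_signals(text):
    """Count two crude stylometric signals of the kind described above:
    occurrences of the word 'key', and spaced single hyphens that
    likely stand in for em dashes."""
    words = re.findall(r"[A-Za-z']+", text)
    key_count = sum(1 for w in words if w.lower() == "key")
    # A hyphen with whitespace on both sides is a hyphen used as an em dash
    spaced_hyphens = len(re.findall(r"\s-\s", text))
    return {"words": len(words), "key": key_count, "spaced_hyphens": spaced_hyphens}

sample = "The key insight - and the key dynamic - is overbuilding."
print(style_signals(sample))  # {'words': 9, 'key': 2, 'spaced_hyphens': 2}
```

Signals like these are weak individually; the comment's argument rests on several of them co-occurring at unusual rates.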

fnord77 - 4 hours ago

> This is the opposite of what happened in telecoms. We're not seeing exponential efficiency gains that make existing infrastructure obsolete. Instead, we're seeing semiconductor physics hitting fundamental limits.

What about the possibility of improvements in training and inference algorithms? Or do we know we won't get any better than gradient descent/Hessians/etc.?

imvetri - 2 hours ago

No. AI datacenters, like any datacenters, are designed with incorrect data structures, resulting in over-utilisation of computing resources.

positron26 - 2 hours ago

Hardware growth is slow and predictable, but one breakthrough algorithm completely undercuts any finance hypothesis premised on compute not flowing out of the cloud and back to the edges and into the phones.

This is a kind of risk that finance people are completely blind to. OpenAI won't tell them, because it keeps capital cheap. Startups that must take a chance on hardware capability remaining centralized won't even bother analyzing the possibility. With so many actors incentivized not to know, or not to bother asking the question, that's where the biggest systemic risk lies.

The real whiplash will come from extrapolation. If an algorithm advance shows up promising to halve hardware requirements, finance heads will reason that we haven't hit the floor yet. A lot of capital will eventually re-deploy, but in the meantime, a great deal of it will slow down, stop, or reverse gears and get un-deployed.

rhetocj23 - an hour ago

[dead]

MarkusQ - 4 hours ago

Holy cow, we've found an exception to Betteridge's Law of Headlines! Talk about burying the lede!