The AI boom is causing shortages everywhere else
washingtonpost.com | 193 points by 1vuio0pswjnm7 11 hours ago
"JPMorgan calculated last fall that the tech industry must collect an extra $650 billion in revenue every year — three times the annual revenue of AI chip giant Nvidia — to earn a reasonable investment return. That marker is probably even higher now because AI spending has increased."
That pretty much tells you how this will end, right there.
Nvidia invests $100bn in OpenAI, which buys $100bn of Nvidia chips; Nvidia invests that $100bn of revenue back into OpenAI, which buys another $100bn of Nvidia chips, and round it goes. That's an easy $600bn increase in tech industry revenue right there.
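(A toy sketch of that circular flow: the same pot of capital, recirculated, books fresh vendor revenue on every pass. The six round trips are my assumption, chosen purely to reproduce the $600bn above - a hypothetical illustration, not anyone's actual accounting.)

# toy model: one pool of $100bn cycling between Nvidia and OpenAI
pot = 100            # $bn actually at stake
booked_revenue = 0
for round_trip in range(6):   # hypothetical number of passes
    # Nvidia "invests" the pot in OpenAI; OpenAI spends it on
    # Nvidia chips, which Nvidia books as revenue
    booked_revenue += pot
print(pot, booked_revenue)    # -> 100 600: $100bn deployed, $600bn booked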
Someone in management read and misunderstood "The Velocity of Money" (https://en.wikipedia.org/wiki/Velocity_of_money)?
Or understood way too well. I promise it isn't Altman that will be left destroyed or jobless if this crashes.
Total US GDP is ~31 trillion, so that's only like 5%. I think it's conceivable that AI could result in ~5% of GDP in additional revenue. Not saying it's guaranteed, but it's hardly an implausible figure. And of course it's even less considering global GDP.
Yup. If you follow the links to the original JP Morgan quote, it's not crazy:
> Big picture, to drive a 10% return on our modeled AI investments through 2030 would require ~$650 billion of annual revenue into perpetuity, which is an astonishingly large number. But for context, that equates to 58bp of global GDP, or $34.72/month from every current iPhone user...
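(The quote's context figures are at least internally consistent; a quick back-of-envelope check, where the global GDP and iPhone install base are backed out of the quote itself rather than independently sourced:)

required = 650e9              # $650B/yr
print(required / 0.0058)      # implied global GDP from "58bp": ~$112T
print(required / 12 / 34.72)  # implied iPhone users from $34.72/mo: ~1.56B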
> or $34.72/month from every current iPhone user...
As a current iPhone user, I'm not signing up for that, especially if it is on top of the monthly cell service fee.
I do realize though that you were trying to provide useful context.
But think about it this way: something simple like Slack charges $9/month/person and companies already pay that on many employees' behalf. How hard would it be to imagine all those same companies (and lots more) paying $30/month/employee for something something AI? Generating an extra $400 per year in value, per employee, isn't that much extra.
Most people in the economy do not use Slack, and that tool may be most beneficial to exactly the people who stand to lose jobs to AI displacement. Maybe after everyone is pink-slipped for an LLM or AI chatbot tool, the total cost to the employer is reduced enough that they are willing to spend part of the money they saved eliminating warm bodies on AI tools, and willing to pay a higher per-employee price.
I think with a smaller employee pool though it is unlikely that it all evens out without the AI providers holding the users hostage for quarterly profits' sake.
That AI will have to be significantly preferable to the baseline of open models running on cheap third-party inference providers, or even on-prem. This is a bit of a challenge for the big proprietary firms.
A lot of iPhone users will be given a subscription via their job. If they still have a job at that point.
This is true, though I think even if the employer provides all this on a per-employee basis, the number of eligible employees left after everyone who stands to lose a job to AI tools is gone will be low enough that each remaining employee will need to add a lot of value for this to be worth it to an employer, so the stated number is probably way too low. Ordinary people may just migrate from Apple products to something more affordable or, in the extreme case, walk away from the whole surveillance economy. Those people would not buy into any of this.
Why did you even say you wouldn't subscribe? It's not relevant in the slightest.
So for that, GDP has to show over 5% growth on top of other growth sources (so total yearly growth would be pretty high). I doubt this will materialise.
Have you ever seen US GDP go up 5% yearly for several years?
That’s the bet! Last time we had that kind of growth was for a few years during the dotcom boom, followed by a lost decade of growth in tech stocks.
Doesn't have to go up. It's also fine if they replace other parts of the economy.
The quote is about a one-time increase in growth of 5 percentage points. Not multiple years or forever.
Or obviously it can be spread out, e.g. ~1% additional increase over 5 years.
It cannot be sustained with just one-time growth. Capital always has to grow, or it will decrease. If this bubble actually manages to deliver interest, this will lead to the bubble growing even larger, driving even more interest.
China did it. It’s not inconceivable.
China’s GDP per capita fell for the first 40 years of CCP rule, making it way easier to have constant growth after that period. https://en.wikipedia.org/wiki/Economic_history_of_China_(191...
Developed countries have slow growth because they need to invent the improvements not just copy what works from other countries.
The chart you linked is for the years before the CCP won the civil war in 1949. But agreed that many of the problems overcome were also problems created after the war.
https://en.wikipedia.org/wiki/Communist-controlled_China_(19...
Starting at 1949 is overly generous IMO, but yes the purges that followed didn’t help.
Yeah but China actively works in the best interest of their entire population.
You're saying that the entire increase in US GDP goes into the pockets of like 5 companies.
The future is not looking bright at all....
I only have a meme to describe what we are facing https://imgur.com/a/xYbhzTj
> I only have a meme to describe what we are facing https://imgur.com/a/xYbhzTj
I don't recognize that cartoon and there's no audio. I'm going to need help with that one.
> The future is not looking bright at all....
The tech industry going through a boom and settling back down at a higher place than before isn't the end of the world. After a while, all these cycles start merging together.
There has never been an industry that does that consistently (that wasn't government subsidised at least).
We got lucky with the dotcom bubble.
There's no guarantee of anything, and it's totally possible for the industry to collapse and stay that way.
I was expecting to see Mark Baum on the phone saying "hey, we're in a bubble".
> "must collect an extra $650 billion in revenue every year" paired with the idea that automating knowledge work can cause a short-term disruption in the economy doesn't seem logical to me.
I find it funny that Microsoft is scaling compute like crazy and their own products like Copilot are being dwarfed by the very models they wish to serve on that compute.
I ran the numbers on hyperscaler AI capex and the math is not going to work out.
With these assumptions:
– Big 4 keep spending at current pace for 3 more years
– Returns only start showing after approx. 2 years
– Heavy competition, with around 20% operating margin on AI and cloud
– 9% cost of capital
This is the current reality:
AWS: approx. $142B/yr
Azure: approx. $132B/yr
Google Cloud: around $71B/yr
Combined, that's about $330B to $340B in annual cloud revenue today.
And let's say the global public cloud market is $700B total today.
To justify the current capex trajectory under those assumptions, by year 3 the big hyperscalers would need roughly $800B to $900B in new annual revenue just to earn a normal return on the capital being deployed.
That implies combined hyperscaler cloud and AI revenue going from: $330B today to $1.2T within 3 years :-))
In other words... cloud would need to grow roughly 4x in a very short window, and the incremental revenue alone would exceed the entire current global cloud market.
So for the investment wave to make financial sense, at least one of these must be true:
1. Cloud/AI spending globally explodes far beyond all prior forecasts
2. AI massively increases revenue/profit in ads, software, commerce, and not just cloud
3. A winner-takes-all outcome where only 1 or 2 players earn real returns
4. Or a large share of this capex never earns an economic return and is defensive
People keep modeling this like normal cloud growth. But what we have is insanity.
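(A minimal sketch of the kind of model the parent describes. The 9% cost of capital and 20% operating margin come from the comment; the combined capex figure, three-year ramp, and depreciation life are my own assumptions, and the result swings heavily with them:)

annual_capex = 350e9      # assumed combined hyperscaler AI capex, $/yr
years = 3                 # "keep spending at current pace for 3 more years"
cost_of_capital = 0.09
operating_margin = 0.20
useful_life = 6           # assumed years before the hardware is written off

capital = annual_capex * years
required_profit = capital * cost_of_capital + capital / useful_life
required_revenue = required_profit / operating_margin
print(required_revenue / 1e9)   # ~1350, i.e. ~$1.35T/yr of new revenue

With these inputs the answer lands even above the parent's $800B-$900B; stretching the depreciation life or shrinking the assumed capex pulls it back toward that range, which mostly shows how sensitive the conclusion is to those two assumptions.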
The question is not "is it a bubble". Bubbles are a desirable feature of the American experiment. The question is "will this bubble lay the foundation for growth and destroy some value when it pops, or will it only destroy value"
What can we use fields of GPUs for next?
Whatever happened to crypto/blockchain ASICs
Nothing happened to them, they're still around; just consolidated into industrial operations.
The "twist" is they rot as e-waste every 18 months when newer models arrive, generating roughly 30,000 metric tonnes of eWaste annually[0] with no recycling programmes from manufacturers (like Bitmain)... which is comparable to the entire country of the Netherlands.
Turns out the decentralised currency for the people is also an environmental disaster built on planned obsolescence. Who knew.
[0]: https://www.sciencedirect.com/science/article/abs/pii/S09213...
AI, obviously! A bubble doesn't mean demand vanishes overnight. There is - at current price points - much more demand than supply. That means the market can tolerate price hikes whilst keeping the accelerators busy. It seems likely that we're still just at the start of AI demand as most companies are still finding their feet with it, lots of devs still aren't using it at all, lots of business workflows that could be automated with it aren't and so on. So there is scope for raising prices a lot as the high value use cases float to the top, maybe even auctioning tokens.
Let's say tomorrow OpenAI and Anthropic have a huge down round, or whatever event people think would mark the end of the bubble. That doesn't mean suddenly nobody is using AI. It means they have to rapidly reduce burn e.g. not doing new model versions, laying off staff and reducing the comp of those that remain, hiking prices a lot, getting more serious about ads and other monetized features. They will still be selling plenty of inferencing.
In practice the action is mostly taking place out of public markets. We won't necessarily know what's happening at the most exposed companies until it's in the rear view mirror. Bubbles are a public markets phenomenon. See how "ride sharing"/taxi apps played out. Market dumping for long periods to buy market share, followed by a relatively easy transition to annual profitability without ever going public. Some investors probably got wiped along the way but we don't know who exactly or by how much.
Most likely outcome: the AI bubble will deflate steadily rather than suddenly burst. Resources are diverted from training to inferencing, new features slow down, new models are weaker and more expensive than the old ones, and the old models are turned off anyway. That sort of thing. People will call it enshittification but it'll really just be the end of aggressive dumping.
There may not be that much demand at a price that yields profit. Demand at current heavily subsidized “the first dose is always free” prices is not a great indicator unless they find some way to make themselves indispensable for a lot of tasks for a lot of people. So far, they haven’t.
"much more demand than supply"? Demand from who?
The demand from middle managers trying to replace their dev teams with Claude Code, mainly.
Please respect other users of Hacker News and don't generate your replies with an LLM.
FWIW, GP doesn't look like clanker speak to me. It's a bit too smooth and on-point for that.
Anyone who regularly tries to rent GPUs on VPS providers knows that they often sell out. This isn't a market with lots of capacity nobody needs. In the dot.com bubble there was lots of dark fiber nobody was using. In this bubble, almost every high-end GPU is being used fully by someone.
We can use the GPUs for research (64-bit scientific compute), 3d graphics, a few other things. We programmers will reconfigure them to something useful.
At least, the GPUs that are currently plugged in. A lot of this bullshit bubble crap is because many of those GPUs (and much of that RAM) are sitting unplugged in a warehouse, because we don't even have enough power to turn them all on.
So if your question is how to use a GPU... I got plenty of useful non-AI related ideas. But only if we can plug them in.
I wouldn't be surprised if many of those GPUs are just e-waste, never to turn on due to lack of power.
> 3d graphics
Seems like the G in GPU is very obsolete now:
https://www.tomshardware.com/news/nvidia-h100-benchmarkedin-...
> As it turns out Nvidia's H100, a card that costs over $30,000 performs worse than integrated GPUs in such benchmarks as 3DMark and Red Dead Redemption 2
It’ll be interesting to see what people come up with to get conventional scientific computing workloads to work on 16 bit or smaller data types. I think there’s some hope but it will require work.
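(One classic family of answers is compensated arithmetic, where a second low-precision variable carries the rounding error a bare 16-bit accumulator throws away. A minimal sketch using NumPy's float16 as a stand-in; an illustration of the general trick, not any particular library's method:)

import numpy as np

def naive_sum_f16(xs):
    # plain float16 accumulator: once the total is large, small
    # addends fall below the rounding threshold and are lost
    s = np.float16(0.0)
    for x in xs:
        s = np.float16(s + x)
    return s

def kahan_sum_f16(xs):
    # Kahan summation: c carries the low-order bits each add discards,
    # roughly doubling the accumulator's effective precision
    s = np.float16(0.0)
    c = np.float16(0.0)
    for x in xs:
        y = np.float16(x - c)
        t = np.float16(s + y)
        c = np.float16(np.float16(t - s) - y)
        s = t
    return s

xs = np.full(10_000, 0.01, dtype=np.float16)
print(naive_sum_f16(xs))   # stalls far short of 100
print(kahan_sum_f16(xs))   # ~100, despite 16-bit storage throughout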
> I wouldn't be surprised if many of those GPUs are just e-waste, never to turn on due to lack of power.
That's my fear.
The problem is these GPUs are specifically made for datacenters, so it's not like your average consumer is going to grab one to put into their gaming PC.
I also worry about what the pop ends up doing to consumer electronics. We'll have a bunch of manufacturers with capacity they can no longer use to create products people want to buy, and a huge glut of second-hand goods that these liquidated AI companies will want to unload. That will put chip manufacturers in a place where they'll need to get their money primarily from consumers if they want to stay in business. That's not the business model they've operated on up until this point.
We are looking at a situation where we have a bunch of oil derricks ready to pump, but shut off because it's too expensive to run the equipment making it not worth the energy.
That's fine. Server is where we programmers are best at repurposing things. Just a bunch of always on boxes doing random crap in the background.
Servers can (and do!!) use 10+ year old hardware. Consumers are kind of the weird ones who are so impatient they need the latest and greatest.
I predict there's going to be a niche opening up for companies to recycle the expensive parts of all this compute hardware that AI companies are currently buying, which will probably be obsolete/depreciated/replaced in the next 2-5 years. The easiest example is RAM chips. There will be people desoldering those ICs and putting them on DDR5 sticks to resell to the general consumer market.
The government is going to use them.
The flock cameras are going to be fed into them.
The bitcoin network will be crashed.
A technological arms race just occurred in front of your eyes for the past 5 years and you think they're going to let the stockpile fall into civilian hands?
In 2 years the next generation of chips will be released and these chips will be obsolete.
That's truly e-waste. Now in practice, we programmers find uses for 10+ year old hardware as cheap webhosts, compiler/build boxes, Bamboo, unit tests, fuzzers and whatever. So as long as we can turn them on, we programmers can and will find a use.
But because we are power constrained, when the more efficient 1.8nm or 1.5nm chips get released (and when those chips use 30% or less power), no one will give a shit about the obsolete stockpile.
I assume even really out of date cards and racks will readily find some use, when the present-day alternative costs ~$100k for a single card. Just have to run them on a low-enough basis that power use is not a significant portion of the overall cost of ownership.
It’s too bad they’re all concentrated in buildings, having been hoovered up by the billionaire class.
I would love to live in the world where everyone joins a pool for inference or training, and as such gets the open source weights and models for free.
We could call it: FOSS
> Bubbles are a desirable feature of the American experiment
Wild speculation detached from reality which destroys personal fortunes is not "a desirable feature."
It's only a "desirable feature" to the nihilistic maniacs that run the markets as it's only beneficial to them.
> Wild speculation detached from reality which destroys personal fortunes is not "a desirable feature."
This is not the definition of a bubble, and is specifically contrary to what I said.
A good bubble, like the automobile industry in the example I linked, paves the way for a whole new economic modality - but value was still destroyed when that bubble popped and the market corrected.
You may think it's better to not have bubbles and limit the maximum economic rate of change (and you may be right), but the current system is not obviously wrong and has benefits.
The trouble is, you can only tell what was "detached from reality" after the fact. Real-world bubbles must be credible by definition, or else they would deflate smoothly rather than growing out of control and then popping suddenly when the original expectations are dashed by reality.
> It's only a "desirable feature" to the nihilistic maniacs that run the markets as it's only beneficial to them.
... and which forces do you think are the core concept of "the American experiment"?
I read "Devil Take the Hindmost: A History of Financial Speculation" last year, and the current AI bubble is like getting a front row seat to the next edition being written.
The really stupid bubbles end up getting themselves metastasized into the public retirement system, I'm just waiting for that to start any day now.
Alternative to archive.ph, no Javascript, no CAPTCHAs:
x=www.washingtonpost.com
{
printf 'GET /technology/2026/02/07/ai-spending-economy-shortages/ HTTP/1.1\r\n'
printf 'Host: '$x'\r\n'
printf 'User-Agent: Chrome/115.0.5790.171 Mobile Safari/537.36 (compatible ; Googlebot/2.1 ; +http://www.google.com/bot.html)\r\n'
printf 'X-Forwarded-For: 66.249.66.1\r\n'
# ask the server to close the connection so ssl_client exits
# instead of waiting on HTTP/1.1 keep-alive
printf 'Connection: close\r\n\r\n'
}|busybox ssl_client -n $x $x > 1.htm
firefox ./1.htm

Isn't the FBI trying to shut down archive.today?
An anonymous site operator, serving CAPTCHAs to force users to enable Javascript, collecting a potential treasure trove of browsing history, trying to evade authorities with fluctuating IP addresses, an assortment of domain registrations, and who knows what other tactics.
It's a crowd favorite.
Are you accusing archive.today of being a honeypot for the feds because they use Cloudflare? That's a bit much don't you think?
Archive.today doesn't use Cloudflare; the admin mimics their captcha page because he hates them. He also used to captcha-loop anyone using Cloudflare's DNS resolver because they don't send the IP subnet of clients to upstreams.
I don't think it's a honeypot, though, it's not like he's learning much about me other than I like not paying for news sites.
> Alternative to archive.ph, no Javascript, no CAPTCHAs:
The other TLDs have been kinder to me (no captcha).
It's hard to comprehend the scale of these investments. Comparing them to notable industrial projects, it's almost unbelievable.
Every week in 2026 Google will pay for the cost of a Burj Khalifa. Amazon for a Wembley Stadium.
Facebook will spend a France-England tunnel every month.
I have been having this conversation more and more with friends. As a research topic, modern AI is a miracle, and I absolutely love learning about it. As an economic endeavor, it just feels insane. How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?
I have to admit I'm flip-flopping on the topic, back and forth from skeptic to scared enthusiast.
I just made an LLM recreate a decent approximation of the file system browser from the movie Hackers (similar to the SGI one from Jurassic Park) in about 10 minutes. At work I've had it do useful features and bug fixes daily for a solid week.
Something happened around New Year's 2026. The clients, the skills, the MCPs, the tools and models reached some new level of usefulness. Or maybe I've been lucky for a week.
If it can do things like what I saw last week reliably, then every tool, widget, utility and library currently making money for a single dev or small team of devs is about to get eaten. Maybe even applications like Jira, Slack, or even Salesforce or SAP can be made in-house by even small companies. "Make me a basic CRM".
Just a few months ago I found it mostly frustrating to use LLMs and I thought the whole thing was little more than a slight improvement over googling info for myself. But the past week has been mind-blowing.
Is it the beginning of the star trek ship computer? If so, it is as big as the smartphone, the internet, or even the invention of the microchip. And then the investments make sense in a way.
The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.
Yeah I’m having a similar experience. I’ve been wanting a standard test suite for JMAP email servers, so we can make sure all created jmap servers implement the (somewhat complex) spec in a consistent manner. I spent a single day prompting Claude code on Friday, and walked away with about 9000 lines of code, containing 300 unit tests for jmap servers. And a web interface showing the results. It would have taken me at least a week or two to make something similar by hand.
There’s some quality issues - I think some of the tests are slightly wrong. We went back and forth on some ambiguities Claude found in the spec, and how we should actually interpret what the jmap spec is asking. But after just a day, it’s nearly there. And it’s already very useful to see where existing implementations diverge on their output, even if the tests are sometimes not correctly identifying which implementation is wrong. Some of the test failures are 100% correct - it found real bugs in production implementations.
Using an AI to do weeks of work in a single day is the biggest change in what software development looks like that I’ve seen in my 30+ year career. I don’t know why I would hire a junior developer to write code any more. (But I would hire someone who was smart enough to wrangle the AI). I just don’t know how long “ai prompter” will remain a valuable skill. The AIs are getting much better at operating independently. It won’t be long before us humans aren’t needed to babysit them.
My team of 6 people has been building software to compete with an already established piece of software written by a major software corporation. I'm not saying we'll succeed, I'm not saying we'll be better, nor that we will cover every corner case they do and that they've learned over the past 30 years. But 6 senior devs are getting stuff done at an insane pace. And if we can _attempt_ to do this, which would have been unthinkable 2 years ago, I can only wonder what will happen next.
Yeah I’m curious how much the moat of big software companies will shrink over the next few years. How long before I can ask a chatbot to build me a windows-like OS from scratch (complete with an office suite) and it can do a reasonable job?
And what happens then? Will we stop using each others code?
I agree with you, and share the experience. Something changed recently for me as well, where I found the mode to actually get value from these things. I find it refreshing that I don't have to write boilerplate myself or think about the exact syntax of the framework I use. I get to think about the part that adds value.
I also have the same experience where we rejected a SAP offering with the idea to build the same thing in-house.
But... aside from the obvious fact that building a thing is easier than using and maintaining the thing, the question arose if we even need what SAP offered, or if we get agents to do it.
In your example, do you actually need that simple CRM or maybe you can get agents to do the thing without any other additional software?
I don't know what this means for our jobs. I do know that, if making software becomes so trivial for everyone, companies will have to find another way to differentiate and compete. And hopefully that's where knowledge workers come in again.
Exactly. I hear this "wow finally I can just let Claude work on a ticket while I get coffee!" stuff and it makes me wonder why none of these people feel threatened in any way?
And if you can be so productive, then where exactly do we need this surplus productivity in software right now, when we're no longer in the "digital transformation" phase?
I don't feel threatened because no matter how tools, platforms and languages improved, no matter how much faster I could produce and distribute working applications, there has never been a shortage of higher level problems to solve.
Now if the only thing I was doing was writing code to a specification written by someone else, then I would be scared, but in my quarter century career that has never been the case. Even at my first job as a junior web developer before graduating college, there was always a conversation with stakeholders and I always had input on what was being built. I get that not every programmer had that experience, but to me that's always been the majority of the value that software developers bring, the code itself is just an implementation detail.
I can't say that I won't miss hand-crafting all the code, there certainly was something meditative about it, but I'm sure some of the original ENIAC programmers felt the same way about plugging in cables to make circuits. The world of tech moves fast, and nostalgia doesn't pay the bills.
Smart devs know this is the beginning of the end of high-paying dev work. Once the LLMs get really good, most dev work will go to the lowest bidder. Just like factory work did 30 years ago.
Not many. Money is not a perfect abstraction. The raw materials used to produce $100B worth of Nvidia chips will not yield you many hospitals. An AI researcher with a $100M signup bonus from Meta ain't gonna lay you much brick.
FWIW the models aren't thrown away. The weights are used to preinit the next foundation model training run. It helps to reuse weights rather than randomize them even if the model has a somewhat different architecture.
As for the rest, constraint on hospital capacity (at least in some countries, not sure about the USA) isn't money for capex, it's doctors unions that restrict training slots.
There is a certain logic to it though. If the scaling approaches DO get us to AGI, that's basically going to change everything, forever. And if you assume this is the case, then "our side" has to get there before our geopolitical adversaries do. Because in the long run the expected "hit" from a hostile nation developing AGI and using it to bully "our side" probably really dwarfs the "hit" we take from not developing the infrastructure you mentioned.
Any serious LLM user will tell you that there's no way to get from LLM to AGI.
These models are vast and, in many ways, clearly superhuman. But they can't venture outside their training data, not even if you hold their hand and guide them.
Try getting Suno to write a song in a new genre. Even if you tell it EXACTLY what you want, and provide it with clear examples, it won't be able to do it.
This is also why there have been zero-to-very-few new scientific discoveries made by LLMs.
Can most people venture outside their training data?
Are you seriously comparing chips running AI models and human brains now???
Last time I checked the chips are not rewiring themselves like the brain does, nor does even the software rewrite itself, or the model recalibrate itself - anything that could be called "learning", normal daily work for a human brain.
Also, the models are not models of the world, but of our text communication only.
Human brains start by building a model of the physical world, from age zero. Much later, on top of that foundation, more abstract ideas emerge, including language. Text, even later. And all of it on a deep layer of a physical world model.
The LLM has none of that! It has zero depth behind the words it learned. It's like a human learning some strange symbols and the rules governing their appearance. The human will be able to reproduce valid chains of symbols following the learned rules, but they will never have any understanding of those symbols. In the human case, somebody would have to connect those symbols to their world model by telling them the "meaning" in a way they can already use. For the LLM that is not possible, since it doesn't have such a model to begin with.
How anyone can even entertain the idea of "AGI" based on uncomprehending symbol manipulation, where every symbol has zero depth of a physical world model, only connections to other symbols, is beyond me TBH.
I mean yeah, but that's why there are far more research avenues these days than just pure LLMs, for instance world models. The thinking is that if LLMs can achieve near-human performance in the language domain then we must be very close to achieving human performance in the "general" domain - that's the main thesis of the current AI financial bubble (see articles like AI 2027). And if that is the case, you still want as much compute as possible, both to accelerate research and to achieve greater performance on other architectures that benefit from scaling.
How does scaling compute not go hand-in-hand with energy generation? To me, scaling one and not the other puts a different set of constraints on overall growth. And the energy industry works at a different pace than these hyperscalers scaling compute.
The other thing here is we know the human brain learns on far fewer samples than LLMs in their current form. If there is any kind of learning breakthrough then the amount of compute used for learning could explode overnight.
Here's hoping you are Chinese, then.
Well, I tried to specifically frame it in a neutral way, to outline the thinking that pretty much all the major nations / companies currently have on this topic.
Remember the good old days of complaining about Bitcoin taking the energy output of a whole town.
It has never _not_ been time to build all the power plants we can environmentally afford.
More power enables higher quality of living and more advanced civilization. It will be put to use doing something useful, or at the very worst it'll make doing existing useful things less expensive opening them up to more who would like those things.
> It has never _not_ been time to build all the power plants we can environmentally afford.
The US' challenge is that new energy sources are waiting 3-5 years before they can connect to the grid.
refs: https://kagi.com/search?q=how+long+are+new+energy+sources+wa...
I'm a simple man, I just want these companies to pay taxes where they make money.
> I'm a simple man, I just want these companies to pay taxes where they make money.
The folks who bankroll elections work tirelessly to ensure this doesn't happen.
Haters will say Sora wasn't worth it.
Incredible how quickly that moment passed. Four months on it's barely clinging to the App Store top 100, below killer apps such as Gossip Harbor®: Merge & Story.
What is Sora?
One of many AI video generators that are filling up all video social media with rage-bait garbage.
Inflation adjusted?
Seems to be.
Facebook's investment is to be $135bn [1]. The Channel Tunnel was "in 1985 prices, £4.65 billion" [2], which is £15.34bn in December 2025 [3], or $20.5bn at the current exchange rate.
That is staggering.
(I didn't check the other figures.)
[1] https://www.bbc.com/news/articles/cn8jkyk78gno
[2] https://en.wikipedia.org/wiki/Channel_Tunnel
[3] https://www.bankofengland.co.uk/monetary-policy/inflation/in...
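(Plugging those two figures into the upthread claim - a quick check using only the numbers cited above:)

meta_capex = 135e9    # [1]
chunnel = 20.5e9      # [2][3], converted to 2025 dollars
print(meta_capex / chunnel)         # ~6.6 tunnels per year
print(12 / (meta_capex / chunnel))  # i.e. one every ~1.8 months

So closer to a tunnel every two months than every month, but the order of magnitude holds.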
Not really your point, but I think people with the skills to create these things are much slower to train than chips and data centres are to produce.
So they couldn't really build any of these projects weekly since the cost of construction materials / design engineers / construction workers would inflate rapidly.
Worth keeping in mind when people say "we could have built 52 hospitals instead!" or similar. Yes, but not really... since the other constraints would quickly reveal themselves
It's incredibly sad and depressing. We could be building green energy, parks, public transit, education, healthcare.
It's caused a massive shortage of interesting content that isn't related to AI.
The best part is every single thread on HN now has someone accusing the author of using AI.
The real question is whether the boom is, economically, a mistake.
If AI is here to stay, as a thing that permanently increases productivity, then AI buying up all the electricians and network engineers is a (correct) signal. People will take courses in those things and try to get a piece of the winnings. Same with those memory chips that they are gobbling up, it just tells everyone where to make a living.
If it's a flash in the pan, and it turns out to be empty promises, then all those people are wasting their time.
What we really want to ask ourselves is whether our economy is set up to mostly get things right, or it is wastefully searching.
"If X is here to stay, as a thing that permanently increases productivity" - matches a lot of different X. Maintaining persons health increases productivity. Good education increases productivity. What is playing out now is completely different - it is both irresistible lust for omniscient power provided by this technology ("mirror mirror on the wall, who has recently thought bad things about me?"), and the dread of someone else wielding it.
Plus, it makes a natural moat against masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, means of production, communication and such things.
> Plus, it makes a natural moat against masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, means of production, communication and such things
I'd put intelligence in quotes there, but it doesn't detract from the point.
It is astounding to me how willfully ignorant people are being about the massive aggregation of power that's going on here. In retrospect, I don't think they're ignorant, they just haven't had to think about it much in the past. But this is a real problem with very real consequences. Sovereignty must occasionally be asserted, or someone will infringe upon it.
That's exactly what's happening here.
>massive aggregation of power that's going on here
Which has been happening since, what, at least the bad old IBM days, and nobody's done a thing about it?
I've given up tbh. It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.
> It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.
it's much worse. A great demographic of Hacker News loves gen AI... these are usually highly educated people showing their true faces despite the plethora of problems this technology generates and the norms it violates.
>I've given up tbh. It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.
Especially at the cost of diverting power and water from farmers and humans who need them. And the benefit of the AI seems quite limited, judging from the recent Signal post here on HN.
Water for farmers is its own pile of bullshit. Beef uses a stupid amount of water. Same with almonds. If you're actually worried about feeding people, and not just producing an expensive economic product, you're not going to make them.
Same goes for people living in deserts where we have to ship water thousands of miles.
Give me a break.
The difference is that we've more or less hit a stable Pareto front in education and healthcare. Gains are small and incremental; if you pour more money into one place and less into another, you generally don't end up much better off, although you can make small but meaningful improvements in select areas. You can push the front forward slightly with new research and innovation, but not very fast or far.
The current generation of AI is an opportunity for quick gains that go beyond just a few months longer lifespan or a 2% higher average grade. It is an unrealised and maybe unrealistic opportunity, but it's not just greed and lust for power that pushes people to invest, it's hope that this time the next big thing will make a real difference. It's not the same as investing more in schools because it's far less certain but also has a far higher alleged upside.
> The difference is that we've more or less hit a stable Pareto front in education and healthcare.
Not even close. So many parts of the world need to be pumped with targeted fund infusions ASAP. Forcing higher levels of education and healthcare in exactly the places where they lag is a viable step towards securing a peaceful and prosperous near future.
Then why didn't that happen before GenAI was a thing?
I think some people may have to face the fact that money was never going to go there under any circumstances.
> Then why didn't that happen before GenAI was a thing?
Because there was no easy way for the people directing capital to those endeavors to make themselves richer.
Pareto is irrelevant, because they are talking about how to use all of this money not currently used in healthcare or education.
> if you pour more money into one place and less into another, you generally don't end up much better off, although you can make small but meaningful improvements in select areas
"Marginal cost barrier" hit, then?
> The difference is that we've more or less hit a stable Pareto front in education and healthcare. Gains are small and incremental;
You probably mean gains for someone receiving healthcare and education now, as compared to 10 years ago, or maybe you mean the year-to-year average across every man alive.
You certainly do not mean that a person receiving appropriate healthcare is only 2% better off than one not receiving it, or that an educated person is only 2% better off than an uneducated one?
Because I find such a notion highly unlikely. So, here you have vast amounts of people you can mine for productivity increases, simply by providing things that already exist and are available in unlimited supply to anyone who can produce money at will. Instead, let's build warehouses and fill them with soon-to-be-obsolete tech, power it all up using a tiny Sun and... what exactly?
This seems like a thinly disguised act of an obsessed person that will stop at nothing to satisfy their fantasies.
> Finally intelligence can also be controlled by capital
The relationship between capital and AI is a fascinating topic. The contemporary philosopher who has thought most intensely about this is probably Nick Land (who is heavily inspired by Eugen von Böhm-Bawerk and Friedrich Hayek). For Land, intelligence has always been immanent in capitalism and capitalism is actively producing it. As we get closer to the realization of capitalism's telos/attractor (technological singularity), this becomes more and more obvious (intelligible).
In 2024, global GDP was $111 trillion.[1] Investing 1 or 2 % of that to improve global productivity via AI does not seem exaggerated to me.
2% is a lot! There's only fifty things you can invest 2% of GDP in before you occupy the entire economy. But the list of services people need, from food, water, shelter, heating, transportation, education, healthcare, communications, entertainment, mining, materials, construction, research, maintenance, legal services... there's a lot of things on that list. To allocate each one 1% or 2% of the economy may seem small, but pretty quickly you hit 100%.
Most of what you have mentioned is not investment, but consumption. Investment means using money to make more money in the future. Global investment rates are around 25% of global GDP. Average return on investment is about 10% per year. In other words: using 1% or 2% of GDP would count as a success if it leads to an improvement in GDP of more than 0.1% or 0.2% next year. I think expecting a productivity gain on this scale due to AI is not unrealistic for 2026.
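(The break-even logic in that paragraph, spelled out with the comment's own figures:)

global_gdp = 111e12        # 2024 global GDP, from the comment above
invested_share = 0.02      # 2% of GDP steered into AI
hurdle_return = 0.10       # average return on investment per year

investment = global_gdp * invested_share      # $2.22T
required_gain = investment * hurdle_return    # $222B/yr
print(required_gain / global_gdp)             # 0.002, i.e. 0.2% of GDP

That matches the 0.1-0.2% threshold above: if the AI buildout lifts output by even a fifth of a percent of GDP per year, it clears the average-return bar.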
I will put it differently:
Investing 1 or 2% of global GDP to increase the wealth gap by 50% more and make the top 1% unbelievably rich, while everyone else is looking for jobs or getting 50-year mortgages, seems like a very bad idea to me.
It can be both? Both that inequality increases but also prosperity for the lower class? I don’t mind that trade off.
If someone were to say to you: you can have 10,000 more iPhones to play with, but your friends would get 100,000 iPhones - would you reject the deal?
A century ago people in the US started to tax the rich much more heavily than we do now. They didn't believe that increasing inequality was necessary - or even actually that helpful - for improving their real livelihood.
Don't be shocked if that comes back. (And that was the mild sort of reaction.)
If you have billions and all the power associated with it, why are you shooting for personal trillions instead of actually, directly improving the day-to-day for everyone else, without even losing your status as an elite, just diminishing it by a little bit of marginal utility? Especially if you read the history of what happens when people make that same decision.
I don't think that is scalable to infinite iPhones, since the input materials are finite. If all your friends get 100,000 iPhones and you then need an EV battery, it might now cost 20,000 iPhones where it previously cost 5,000 - so even with your extra 10,000 iPhones, you're down 5,000 net. On the other hand, if you already had a good battery, then you're up 20k iPhones or so in equity. Also, since everyone has so many iPhones, the net utility drops and they become worth less than the materials, so everyone would have to scrap their iPhones to liquidate at the cost of the recycled metals.
It can be, but there are lots of reasons to believe it will not be. Knowledge work was the ladder between lower and upper classes. If that goes away, it doesn't really matter if electricians make 50% more.
I guess I don’t believe knowledge work will completely go away.
It's not really a matter of some great shift. Millennials are the most educated generation by a wide margin, yet their wealth by middle age is trailing prior generations'. The ladder is being pulled up inch by inch and I don't see AI doing anything other than speeding up that process at the moment.
>Both that inequality increases but also prosperity for the lower class? I don’t mind that trade off.
This sounds like it is written from the perspective of someone who sees their own prosperity increase dramatically so that they end up on the prosperous side of the worsening inequality gap. The fact that those on the other side of the gap see marginal gains in prosperity makes them feel that it all worked out okay for everyone.
I think this is greed typical of the current players in the AI/tech economy. You all saw others gaining abundant wealth by landing high-paying jobs with tech companies and you want not only to do the same, but to one-up your peers. It's really a shame that so much tech-bro identity revolves around personal wealth, with zero accountability for the tools that you are building to set yourselves in control of the lives of those you have chosen either to leave behind or to wield as tools for further wealth creation through alternate-income SaaS subscription streams or other bullshit scams.
There really is not much difference between tech-bros, prosperity gospel grifters or other religious nuts whose only goal is to be more wealthy today than yesterday. It's created a generation of greedy, selfish narcissists who feel that in order to succeed in their industry, they need to be high-functioning autists so they take the path of self-diagnosis and become, as a group, resistant to peer review since anyone who would challenge their bullshit is doing the same thing and unlikely to want too much light shed on their own shady shit. It is funny to me that many of these tech-bros have no problem admitting their drug experimentation since they need to maintain an aura of enlightenment amongst their peers.
It's gonna be a really shitty world when the dopeheads run everything. As someone who grew up back in the day when smoking dope was something hidden and paranoia was a survival instinct for those who chose that path I can see lots of problems for society in the pipeline.
I think you inadvertently stepped in the point — Yes, what the fuck do I need 10,000 iPhones for? Also part of the problem is which resources end up in abundance. What am I going to do with more compute when housing and land are a limited resource.
Gary’s Economics talks about this, but in many cases inequality _is_ the problem. More billionaires means more people investing in limited resources (housing), driving up prices.
Maybe plebes get more money too, but not enough to spend on the things that matter.
It’s just a proxy for wealth using concrete things.
If you were given 10,000 dollars but your friends were given 100,000 dollars as well, would you take the deal?
Land and housing can get costlier while other things get cheaper, making you overall more prosperous. This is what happened in the USA and most of the world. Would you take this deal?
I wouldn't be able to hang out with them as much (they'd go do a lot of higher-cost things that I couldn't afford anymore).
I'd have a shittier apartment (they'd drive up the price of the nicer ones, if we're talking about a significant sized group; if it's truly just immediate friends, then instead it's just "they'd all move further away to a nicer area").
So I'd have some more toys but would have a big loss in quality of my social life. Pass.
(If you promised me that those cracks wouldn't happen, sure, it would be great for them. But in practice, having seen this before, it's not really realistic to hold the social fabric together when economic inequality increases rapidly and dramatically.)
> If you were given 10,000 dollars but your friends were also 100,000 dollars as well, would you take the deal?
This boldly assumes the 10k actually reaches me. Meanwhile 100k payouts endlessly land as expected.
usa sources: me+kids+decade of hunger level poverty. no medical coverage for decades. homeless retirement still on the table.
More to the point, what does research into notions of fairness among primates tell us about the risks of a vast number of participants deciding to take this deal?
You have to tell us the answer so we can resolve your nickname "simianwords" with regard to Poe's Law.
I don't know how nobody has mentioned this before:
The guy with 100k will end up rewriting the rules so that in the next round, he gets 105k and you get 5k.
And people like you will say "well, I'm still better off"
In future rounds, you will try to say "oh, I can't lose 5k for you to get 115k" and when you try to vote, you won't be able to vote, because the guy who has been making 23x what you make has spent his money on making sure it's rigged.
You’re missing the point. It’s not about jealousy it’s basic economics - supply and demand. No I would not take the deal if it raised the demand in something central to my happiness (housing) driving the price up for something previously affordable and make it unaffordable.
I would not trade my house for a better iPhone with higher quality YouTube videos, and slightly more fashionable athleisure.
I don’t care how many yachts Elon Musk has, I care how many governments.
What if you could buy the same house as before, buy the same iPhone as before and still have more money remaining? But your house cost way way more proportionally.
If you want to claim that that's a realistic outcome you should look at how people lived in the 50s or 80s vs today, now that we've driven up income inequality by dramatically lowering top-end tax rates and reduced barriers to rich people buying up more and more real property.
What we got is actually: you can't buy the same house as before. You can buy an iPhone that didn't exist then, but your boss can use it to request you do more work after-hours whenever they want. You have less money remaining. You have less free time remaining.