Furiosa: 3.5x efficiency over H100s
furiosa.ai | 180 points by written-beyond 11 hours ago
I am of the opinion that Nvidia's hit the wall with their current architecture in the same way that Intel has historically with its various architectures: the current generation's power and cooling requirements demand the construction of entirely new datacenters built to different designs, which is going to blow out the economics of inference (GPU + datacenter + power plant + nuclear fusion research division + lobbying for datacenter land + water rights + ...).
The story with Intel in those eras was usually that AMD or Cyrix or ARM or Apple or someone else would come along with a new architecture that was a clear generational jump past Intel's, and, most importantly, one that broke through the thermal and power ceilings of the Intel generation (at which point Intel would typically fire its chip design group, hire everyone from AMD or whoever, and come out with Core or the like). Nvidia effectively has no such competition, or hasn't had any: nobody has actually broken the CUDA moat, so neither Intel nor AMD nor anyone else is really competing for the datacenter space, and Nvidia hasn't faced any real competitive pressure against things like multi-kilowatt power draws for the Blackwells.
The reason this matters is that LLMs are incredibly nifty, often useful tools that are not AGI and also seem to be hitting a scaling wall, and the only way to make the economics of, e.g., a Blackwell-powered datacenter work is to assume the entire economy is going to run on it, as opposed to some useful tools and some improved interfaces. Otherwise the investment numbers just don't make sense: the gap between the real but limited value LLMs add, as we actually see them used on the ground, and the full cost of providing that service from a brand-new single-purpose "AI datacenter" is just too great.
So this is a press release, but any time I see something that looks like a genuinely new hardware architecture for inference, and especially one that doesn't require constructing a new building or solving nuclear fusion, I'll take it as a good sign. I like LLMs and I've gotten a lot of value out of them, but nothing about the industry's finances adds up right now.
> I am of the opinion that Nvidia's hit the wall with their current architecture
Based on what?
Their measured performance on the things people care about keeps going up, and their software stack keeps getting better, unlocking more performance on existing hardware.
Inference tests: https://inferencemax.semianalysis.com/
Training tests: https://www.lightly.ai/blog/nvidia-b200-vs-h100
https://newsletter.semianalysis.com/p/mi300x-vs-h100-vs-h200... (only H100, but vs AMD)
> but nothing about the industry's finances adds up right now
Is that based just on the HN "it is lots of money so it can't possibly make sense" wisdom? Because the released numbers seem to indicate that inference providers and Anthropic are doing pretty well, and that OpenAI is really only losing money on inference because of the free ChatGPT usage.
Further, I'm sure most people heard the mention of an unnamed enterprise paying Anthropic $5000/month per developer on inference(!!) If a company is that cost-insensitive, is there any reason why Anthropic would bother to subsidize it?
> Their measured performance on the things people care about keeps going up, and their software stack keeps getting better, unlocking more performance on existing hardware
I'm more concerned with fully-loaded dollars per token - including datacenter and power costs - than with "does the chip go faster." If Nvidia couldn't make the chip go faster, there wouldn't be any debate; the question right now is what those improvements cost. I don't have that number, but the figures going around for new datacenter buildouts don't give me a lot of optimism.
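To put rough shape on "fully loaded," here's a minimal back-of-the-envelope sketch. Every constant is a placeholder I made up for illustration, not a real figure for any chip or facility; the point is only that building and power costs sit in the same per-token denominator as chip throughput.

    # All numbers below are hypothetical placeholders, not real figures.
    gpu_capex = 40_000          # assumed per-GPU purchase price, USD
    facility_capex = 20_000     # assumed per-GPU share of datacenter buildout, USD
    amortization_years = 4      # assumed useful life

    power_kw = 1.2              # assumed average draw per GPU, kW
    power_price = 0.08          # assumed electricity price, USD per kWh
    tokens_per_second = 2_000   # assumed sustained throughput per GPU

    seconds = amortization_years * 365 * 24 * 3600
    capex_per_token = (gpu_capex + facility_capex) / (tokens_per_second * seconds)
    power_per_token = (power_kw * power_price / 3600) / tokens_per_second

    print(f"${(capex_per_token + power_per_token) * 1e6:.3f} per million tokens")

With these placeholder inputs the capex share is roughly twenty times the power share, which is exactly why "does the chip go faster" isn't the whole question: a faster chip that needs a new building can still lose on fully-loaded $/token.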
> Is that based just on the HN "it is lots of money so it can't possibly make sense" wisdom?
OpenAI has $1.15T in spend commitments over the next 10 years: https://tomtunguz.com/openai-hardware-spending-2025-2035/
As far as revenue goes, the released numbers from nearly anyone in this space are questionable - these aren't public companies, and we don't actually get to see inside the box. Torture the numbers right and they'll tell you anything you want to hear. What we _do_ get to see is, e.g., Anthropic raising billions of dollars roughly every three months over the course of 2025. Maybe they're just that ambitious, but that's the kind of thing that makes me nervous.
> OpenAI has $1.15T in spend commitments over the next 10 years
Yes, but those aren't contracted commitments, and we know some of them are equity swaps. For example, "Microsoft ($250B Azure commitment)" from the footnote represents an unknown amount of actual cash.
And I think it's fair to point out the other information in your link: "OpenAI projects a 48% gross profit margin in 2025, improving to 70% by 2029."
> "OpenAI projects a 48% gross profit margin in 2025, improving to 70% by 2029."
OpenAI can project whatever they want; they're not public.
The fact that there's an incestuous circle between OpenAI, Microsoft, Nvidia, AMD, etc., where they make massive promises of future business to each other, is nothing short of hilarious.
The economics of the entire setup are laughable and it's obvious that it's a massive bubble. The profit that'd need to be delivered to justify the current valuations is far beyond what is actually realistic.
What moat does OpenAI have? I'd argue basically none. They make extremely lofty forecasts and project an image of crazy growth opportunities, but is that ever going to survive the bubble popping?
I still don't really understand this "circle" issue. If I fix your bathroom and in return you make me a new table, is that an incestuous circle? Haven't we both just exchanged value?
The circle allows you to put an arbitrary "price" on those services. You could say the bathroom and the table are worth $100 each, so your combined work was $200 - or you could claim each of you did $1M of work. Without actual money flowing into or out of the circle, your claims aren't tethered to reality.
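Here's a toy sketch of that point, with purely made-up numbers: in a closed two-party swap, both sides can book whatever headline figure they agree on, while net external cash stays zero.

    # Toy model of a closed two-party swap. The numbers are arbitrary by
    # design: that's the point.
    def book_swap(agreed_price: float) -> tuple[float, float]:
        a_revenue = agreed_price   # A "sells" services to B at the agreed price
        b_revenue = agreed_price   # B "sells" services back to A at the same price
        net_external_cash = 0.0    # nothing enters the circle from outside
        return a_revenue + b_revenue, net_external_cash

    print(book_swap(200))          # (400, 0.0)
    print(book_swap(2_000_000))    # (4000000, 0.0) - same economics, bigger headline

Either way the real economics are identical; only the reported number changes.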
> Yes, but those aren't contracted commitments, and we know some of them are equity swaps.
It's worse than not contracted. Nvidia said in their earnings call that their OpenAI commitment was "maybe".
GPUs are supply-constrained and prices aren't declining that fast, so why would you expect token prices to decrease? I think the supply issue will resolve in 1-2 years, now that vendors have a better read on how fast the market will grow.
Nvidia is literally selling GPUs at a 90% profit margin and still everything is out of stock, which is unheard of.
>Further, I'm sure most people heard the mention of an unnamed enterprise paying Anthropic $5000/month per developer on inference
Companies have wasted more money on dumber things, so spending isn't a good measure.
And what about the countless other AI companies? Anthropic has one of the top models for coding, so that's like saying there wasn't a problem before the dot-com bubble burst because Amazon was doing fine.
The real effects of AI are measured in the rising profits of those AI companies' customers; otherwise you're just looking at the shovel sellers.
> Is that based just on the HN "it is lots of money so it can't possibly make sense" wisdom?
I mean, the amount of money invested across just a handful of AI companies is currently staggering, and their respective revenues are nowhere near where they need to be. That's a valid reason to be skeptical. How many times have we seen speculative investment of this magnitude? It's shifting entire municipal and state economies in the US.
OpenAI alone is currently projected to burn over $100 billion by 2028 or 2029 (I forget exactly which I read the other day) - tens of billions a year. That is a hell of a gamble by investors.
The flip side is that these companies seem to be capacity constrained (although that is hard to confirm). If you assume the labs are capacity constrained, which seems plausible, then building more capacity could pay off by allowing labs to serve more customers and increase revenue per customer.
This means the bigger questions are whether you believe the labs are compute constrained, and whether you believe more capacity would allow them to drive actual revenue. I think there is a decent chance of this being true, and under this reality the investments make more sense. I can especially believe this as we see higher-cost products like Claude Code grow rapidly with much higher token usage per user.
This all hinges on demand materialising when capacity increases, and margins being good enough on that demand to get a good ROI. But that seems like an easier bet for investors to grapple with than trying to compare future investment in capacity with today's revenue, which doesn't capture the whole picture.
I am not someone who would ever be considered an expert on factories or manufacturing of any kind, but my (insanely basic) understanding is that a "factory" making whatever widgets or doodads typically outputs at a profit, or has a clear path to profitability, in order to pay off a loan or investment. It has debt, but it's moving toward the black in a concrete, relatively predictable way; nobody speculates on a factory anywhere near the degree they do on AI companies right now. If the factory's output is maxed and it's still not making money, then it's a losing investment and it wouldn't expand.
Basically, it strikes me as not really apples to apples.
Consensus seems to be that the labs are profitable on inference. They are only losing money on training and free users.
The competition requiring them to spend that money on training and free users does complicate things. But viewed purely from an inference perspective, treating these data centres as token factories makes sense. I would definitely pay more for faster inference of Opus 4.5, for example.
This is also not wholly dissimilar to other industries where companies spend heavily on R&D while running profitable manufacturing. Pharma, semiconductors, and hardware companies like Samsung or Apple all do this. The unusual part with AI labs is the ratio and the uncertainty, but that's a difference of degree, not kind.
> But viewed purely from an inference perspective, treating these data centres as token factories makes sense.
So if you ignore the majority of the costs, then it makes sense.
Opus 4.5 was released on November 25, 2025 - less than two months ago. When they stop training new models, we can forget about training costs.
I'm not taking a side here - I don't know enough - but it's an interesting line of reasoning.
So I'll ask: how is that any different from fabs? From what I understand, R&D costs are absurd and upgrading to a new node is even more so. The resulting chips sell for chump change on a per-unit basis (analogous to tokens). But somehow it all works out.
Well, sort of. The bleeding-edge companies kept dropping out until, at this point, you can count them on one hand.
At first glance it seems like the analogy might fit?
Someone else mentioned it elsewhere in this thread, and I believe this is the crux of the issue: this is all predicated on actual end users finding enough benefit in LLM services to keep the gravy train going. It's irrelevant how scalable and profitable the shovel makers are; to keep this business afloat long term, the shovelers - i.e., the end users - have to make money using the shovels. Those expectations are currently ridiculously inflated, far beyond anything in the past.
Invariably, there's going to be a collapse in the hype, the bubble will burst, and an investment deleveraging will remove a lot of money from the space in a short period of time. The bigger the bubble, the more painful and less survivable this event will be.
Inference costs scale linearly with usage. R&D expenses do not.
That's not to mention that Dario Amodei has said that their models actually have a good return, even when accounting for training costs [0].
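As a minimal sketch of that linear-vs-fixed claim (every constant here is invented for illustration, not an actual figure from any lab):

    # Hypothetical cost model: training is a one-off fixed cost per model
    # generation, while serving cost scales with tokens sold. All numbers invented.
    TRAINING_COST = 1_000_000_000   # assumed one-off training cost, USD
    COST_PER_MTOK = 0.50            # assumed serving cost per million tokens, USD
    PRICE_PER_MTOK = 3.00           # assumed price per million tokens, USD

    def profit(mtok_sold: float) -> float:
        revenue = PRICE_PER_MTOK * mtok_sold
        cost = TRAINING_COST + COST_PER_MTOK * mtok_sold
        return revenue - cost

    # Positive per-token margin means a break-even volume exists, past which
    # every extra token sold is profit against the fixed training bill.
    breakeven = TRAINING_COST / (PRICE_PER_MTOK - COST_PER_MTOK)
    print(f"break-even at {breakeven:,.0f} million tokens")
    print(f"profit at break-even: {profit(breakeven):,.0f}")

Whether R&D really behaves as a fixed cost across model generations is the load-bearing assumption here.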
> Inference costs scale linearly with usage. R&D expenses do not.
Do we know this is true for AI?