Anthropic expands partnership with Google and Broadcom for next-gen compute

anthropic.com

262 points by l1n 16 hours ago

skybrian - 13 hours ago

I guess gigawatts is how we roughly measure computing capacity at the datacenter scale? Also saw something similar here:

> Costs and pricing are expressed per “token”, but the published data immediately seems to admit that this is a bad choice of unit because it costs a lot more to output a token than to input one. It seems to me that the actual marginal quantity being produced and consumed is “processing power”, which is apparently measured in gigawatt hours these days. In any case, I think more than anything this vindicates my original decision not to get too precise. [...]

https://backofmind.substack.com/p/new-new-rules-for-the-new-...

Is it priced that way, though? I assume next-gen TPUs will be more efficient?
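To make the input/output asymmetry from the quoted passage concrete, here is a toy cost calculation. The per-token rates below are made up for illustration; actual published prices differ by provider and model:

```python
# Hypothetical per-million-token rates (illustrative only, not real prices).
PRICE_IN = 3.00    # $ per 1M input tokens
PRICE_OUT = 15.00  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request under the toy rates above."""
    return (input_tokens * PRICE_IN + output_tokens * PRICE_OUT) / 1_000_000

# Even a prompt 5x longer than the reply splits cost evenly here,
# because each output token is 5x the price of an input token.
print(round(request_cost(10_000, 2_000), 2))  # → 0.06
```

The point of the exercise: “cost per token” hides a large multiplier between the two token kinds, which is why per-token pricing is an awkward unit for the underlying compute being sold.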

ketzo - 12 hours ago

$19B -> $30B annualized revenue in a month?

Feels like the lede is buried here!

cebert - 13 hours ago

I’m surprised Anthropic wanted to partner with Broadcom, given Broadcom’s negative reputation after antics like the VMware acquisition.

chimpanzee2 - 5 hours ago

On a tangential note: It seems the whole theater with the DoD is over for now, am I seeing this right?

mahadillah-ai - 11 hours ago

Interesting to see Anthropic investing in compute infrastructure. The bottleneck I keep hitting is not raw compute but where that compute lives — EU customers increasingly need guarantees their data stays in-region. More sovereign compute options in Europe would unlock a lot of enterprise AI adoption.

nopurpose - 5 hours ago

How does a compute shortage actually manifest if demand outstrips supply? They obviously never close sign-ups, so the only lever is longer queues? But if demand is growing like crazy, queues should keep getting longer, yet my Claude Pro plan seems snappy, with only occasional retries due to 429s.
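The occasional 429s mentioned above are usually absorbed client-side with exponential backoff, which is why capacity pressure can be invisible to an end user. A minimal sketch (the flaky endpoint here is a stand-in, not any real API):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 Too Many Requests response."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry fn() on rate limits, doubling the wait each attempt with jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # Exponential backoff; jitter spreads out simultaneous retriers.
            delay = base_delay * (2 ** attempt) * (0.5 + random.random() / 2)
            sleep(delay)

# Fake endpoint that rate-limits the first two calls, then succeeds.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

print(call_with_backoff(flaky_request, sleep=lambda _: None))  # → ok
```

With a couple of silent retries per request, moderate overload shows up as slightly higher latency rather than visible failures.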

NeoBild - 3 hours ago

Interesting timing given the quantum computing timeline pressure from this week's cryptography discussions. $30B run-rate and gigawatts of TPU capacity — and meanwhile the most interesting AI work I've seen lately runs on a phone in Termux with no cloud dependency at all. Both things are true simultaneously.

car - 5 hours ago

TPU architecture explained

https://news.ycombinator.com/item?id=47637597

Eufrat - 14 hours ago

Can someone explain why everything is being marketed in terms of power consumption?

holografix - 10 hours ago

I don’t understand Claude Code’s moat here. What can it do that opencode can’t or couldn’t fairly easily implement?

mikert89 - 14 hours ago

There's no limit to the algorithms. People don't understand yet. They can learn the whole universe with a big enough compute cluster. We built a generalizable learning machine.
