Google plans to invest up to $40B in Anthropic
bloomberg.com
624 points by elffjs 20 hours ago
Context: a few weeks ago, Anthropic signed a deal to buy "multiple gigawatts of next-generation TPU capacity" from Google and Broadcom [1]. There have been several previous deals, too.
Some people call this sort of thing a "circular deal", but perhaps a better way to think of it is as a very large-scale version of vendor financing? The simple version of vendor financing is when a vendor gives a retailer time to pay for goods they purchased for resale. This is effectively a loan that's backed by the retailer's ability to resell the goods. There's a possibility that the retailer goes broke and doesn't pay, but the vendor has insight into how well the retailer is doing, so they know if they're a good risk.
Similarly, Google likely knows quite a lot about Anthropic because Anthropic buys computing services from Google for resale. They're making an equity investment rather than a loan, but the money will be coming back to Google, assuming Anthropic's sales continue to rise as fast as they have been.
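The vendor-financing mechanics described above can be sketched with a toy model (all numbers hypothetical, not Google's or Anthropic's actual terms):

```python
# Toy sketch of the vendor-financing analogy (all numbers hypothetical).
# The vendor extends credit; its exposure is capped by what it shipped,
# and it can watch the retailer's sell-through to judge repayment risk.

def vendor_exposure(goods_shipped: float, payments_received: float) -> float:
    """Outstanding credit the vendor has extended to the retailer."""
    return goods_shipped - payments_received

def months_to_repay(exposure: float, monthly_resale_revenue: float,
                    margin: float = 0.2) -> float:
    """Rough time to repay if the retailer repays out of its resale margin."""
    return exposure / (monthly_resale_revenue * margin)

# Hypothetical: vendor shipped $10M of goods, retailer has paid back $4M,
# and resells $2M/month at a 20% margin.
exposure = vendor_exposure(10e6, 4e6)         # $6M outstanding
print(round(months_to_repay(exposure, 2e6)))  # 15 (months)
```

The key point of the analogy survives even in this toy version: the vendor's risk is bounded by the outstanding exposure, and the resale rate gives it an early signal of whether the customer is a good credit.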
Also, if you own Google stock, some small part of that is an investment in Anthropic?
[1] https://www.anthropic.com/news/google-broadcom-partnership-c...
So yes, but that doesn't negate the circular investment aspect, for most intents and purposes.
The risk from this structure mostly has to do with how it affects market cap: companies using the value of their shares to fund demand for their services.
That's a risk.
I feel like the whole market at this point is just AI since big tech other than Apple are all massively invested into that. Everyone owns either the S&P or the total world ETF which are both heavily skewed towards big tech and this trade - so literally everybody is in it. It might go well for a few more quarters/years but once something breaks or gets exponentially cheaper this will take down the whole market with it.
It's just hard to tell the difference between "real" demand and "circular." That's the concern.
PG had an essay about this during the dotcom bubble, when he worked at Yahoo. IIRC, Yahoo's share price and other big successes in the space attracted investment into startups. Startups used that money to advertise on Yahoo, and Yahoo bought some of those startups.
So a lot of the revenue used to analyze companies for investment was actually a second-order side effect of these investments.
Here the risk is that we have AI investments servicing AI investments for other AI investments.
Google buys Nvidia chips to sell Anthropic compute. Anthropic sells coding assistance to AI companies (including Google and Nvidia). They buy Anthropic's services with investor money that is flowing because of all this hype.
IMO the general risk factor is trying to get ahead of actual worldly use.
The AI optimists have a sense that AI produces things that are valuable (like software) at massive scale... that is, output.
But even if true, it will take a lot of time, and a lot of software, for the economy to discover this, go through the path dependencies, and actually produce value.
The most valuable, known software has already been written. The stuff that you could do but haven't yet is stuff that hasn't made the cut. Value isn't linear.
I'm starting to transition how we build software at our company due to the power of AI. No more: five code monkey contractors under a lead. Two top-notch devs are all that is needed now, unrestrained by sprints and mindless ceremonies. There is going to be a giant sucking sound in India.
I can't continue the current model. The dev that gets AI is done in five hours; the ones that don't are thrashing for the next two weeks. I have to unleash the good AI dev. I have the Product team handing us markdown files now with an overview of the project and all the details and stories built into them. I'm literally transforming how a billion-dollar company works right now because of this. I have Codex, Claude and GitHub Copilot enterprise accounts on top of Office 365. Everyone is being trained right now, as most devs are still behind.
> literally everybody
I personally make sure I really diversify, so when I buy funds, I buy ones holding stocks of EU companies that pay dividends. AFAICT there are zero European AI companies that pay dividends.
>Companies using the value of their shares to fund demand for their services.
That's not what's happening here though. Google isn't using the value of its shares to fund demand. Google is using its own cash flow to fund this demand from Anthropic.
The question is whether Anthropic has demand from end users for the capacity they are buying from Google (that's a yes I guess) and whether that demand is profitable for Anthropic (that's a question mark).
True.
Regardless, (a) its ability/desire to make such investments is still driven by stock-driven optimism, and (b) these transactions' "signal" can have a similar, warping effect.
In this case the transaction creates demand for Google's services and also funds Anthropic's growth... which represents demand for Google's services.
"Loop" is an approximation of an analogy. The risk is that enough of such transactions create a dynamic that distorts feedback.
>(a) its ability/desire to make such investments is still driven by stock-driven optimism
I don't think it has much to do with the stock price at all. Current platform oligopolists fear the rise of new platforms. They want a foot in the door for strategic reasons.
What could happen is that frontier labs like Anthropic and OpenAI never become platforms and turn out to be providers of a largely commoditised, low margin service.
In that event, current valuations are too high. But Anthropic's valuation doesn't seem extreme to me. Their $30bn annual run rate is valued at $380bn.
Given this price and Anthropic's strategic value, Google's investment seems reasonable.
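A quick sanity check on the figures quoted above ($30bn run rate, $380bn valuation) gives the implied revenue multiple:

```python
# Revenue multiple implied by the figures in the comment above.
run_rate = 30e9    # claimed annual revenue run rate
valuation = 380e9  # claimed valuation
multiple = valuation / run_rate
print(f"{multiple:.1f}x revenue")  # 12.7x revenue
```

Whether ~13x forward revenue is "reasonable" depends entirely on how durable you think that revenue and its margins are, which is exactly the question mark raised earlier in the thread.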
But OpenAI/Anthropic are not selling the compute as they're buying that from Google/Amazon/etc.
So they're selling the transformation, or the model. Or the ability to make a model. And their brand and their harness.
And it seems like the model is definitely not worth 380 billion. Models depreciate incredibly fast. There are lots of models and the other models aren't that far behind.
And it seems like the harness is not worth much as there's already open source alternatives that people claim are better.
And all these companies are paying lots of money for these AI training experts.
But I suspect that any regular Hacker News reader of 10 years dev experience could become a training expert in months if allowed to play with a load of compute and a lot of data for a bit.
Just like any of us could have become a data scientist, this stuff is not particularly hard. Random horny dudes on the internet are putting out loras and quantized models in days against the open source image models.
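For a sense of why hobbyist quantization is tractable, here is a minimal sketch of symmetric int8 weight quantization; this is a deliberate simplification, and real quantized releases use more elaborate schemes (group-wise scales, outlier handling, etc.):

```python
import numpy as np

# Minimal symmetric int8 weight quantization: map the largest-magnitude
# weight to +/-127 and round everything else onto that integer grid.
def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.02, -0.5, 1.27, -1.0], dtype=np.float32)
q, s = quantize_int8(w)
print(q)                 # int8 codes: 2, -50, 127, -100
print(dequantize(q, s))  # close to the original weights
```

The core trick really is this small; the engineering effort in serious quantized releases goes into preserving model quality at scale, not into the arithmetic itself.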
So what's worth 380 billion exactly? The brand?
These valuations just look really off. Not by one order of magnitude, but more like by three orders of magnitude. Like $380 million might be a reasonable valuation, but not $380 billion.
What I also don't get is that it's pretty obvious to me that the Europeans should all be spinning up their own, not necessarily massive, data centers and throwing a few billion at some guys in Cambridge or Stockholm or London or Berlin to make their own AI models.
Only the French have done it.
But instead the rest seem to be trying to court Anthropic or OpenAI to build data centers. Which is just stupid politics given what's happening in the world right now.
The tech industry goes through investment phases to produce oligopolies it turns around and enshittifies, parasitizing income off what it has built. Venture capital, acquisitions, acquihires, circular investments - It’s been incestuous for years. The question is whether competition from China’s sophisticated tech sector, which already surpasses the US in many areas, will put a pin in these plans this time round.
To be honest, I think "vendor financing" is still a very risky premise.
Vendors may be positioned to know how a customer is doing, but they're also incentivized to overestimate how well a customer is going to perform.
GE Capital (edit: and GMAC) is a great example of how seemingly reasonable vendor financing can cause the lender serious problems.
> To be honest, I think "vendor financing" is still a very risky premise.
Are you aware that all heavy industry in all highly developed nations makes extensive use of vendor financing to sell their products? Siemens is a perfect example of a well-run, stable industrial giant. They offer vendor financing for large purchases. Same for the "heavies" (Mitsubishi, Kawasaki, IHI, Hyundai, Doosan, Hanjin) in Japan and Korea.
If anyone is interested in learning about the damage that financialisation brought upon General Electric (USA), you can ask ChatGPT to tell you the story. It is too long to repeat here.
Here is a sample prompt that I used to remind myself:
> I am interested in the history of General Electric and the trouble that their financing units brought in the early to mid 2000s. Can you tell me more?
Are we replacing "Let me google that for you" with "Here is a prompt to feed ChatGPT" now?
Edit: I am not asking whether ChatGPT is better than Google Search, I am asking after the standard dodge of citing one's sources.
Fair point/question. For many of my HN responses, I first ask ChatGPT for a bit of information about the topic. For the case of GE Cap's wrecking of parent GE with excessive financialisation, I could only loosely remember the details from the 2000s. It is a long time ago! The prompt that I shared gave a reply that was hundreds of words. Too much for copy/pasta, and too hard for me to summarise briefly. Instead, I decided to share the prompt. It is not my intention to dodge sources. Plus, the newest versions of ChatGPT are pretty good about sharing sources. (Of course, the quality of sources can be debatable.) In short, it was not my intention to be snarky by sharing my ChatGPT prompt.
EDIT ---- Also, the OP was so brief about GE Cap, I realised that most readers under 30 (maybe 35) will have almost no knowledge or memory of that economic history. I wanted to offer an "intellectual carrot" (ChatGPT prompt) for anyone wishing to learn more. ----
What bothered me most about the original post was the person was putting all vendor financing in the same "bad" bucket. I disagree. I would characterise GE Cap as an infamous example! They were the worst of the worst in a generation (25 years). Most vendor financing is very boring and is used to buy big heavy things with very long operational lives. If the buyer goes bankrupt, it is (relatively) easy to repossess the big heavy thing and sell it again (probably with vendor financing again!).
Very tangentially related comment, but I remember seeing a post on a local Facebook clone with a prompt to throw at Claude to "make a custom YouTube downloader for MacOS", so the general "Here is a prompt to feed an LLM" is somewhat real for some, apparently
It's a good use case really – it'll answer differently according to what it knows about your background; if you 'just Google it' you'll get the same maybe-appropriate results as anyone else.
Yes, because Google had been giving crap results long before ChatGPT was a thing, and it only got worse. Before AI it was "let me google that on Reddit for you".
Google search has gone way downhill since they nerfed it and then did nothing to prevent the flood of AI-slop SEO websites. So unfortunately, instead of sharing links, everyone now gets sent to the inefficient text generator that hallucinates nonsense and colors the average summary of a topic by whoever trained it and your most recent chat history.
I haven't run a Google search in two years. Your comment just made me realize that. Doing a Google search is like trying to watch cable after being on YouTube for years.
> Are you aware that all heavy industry in all highly developed nations make extensive use of vendor financing to sell their products?
The OP did mention GE Capital, the motherlode of all heavy industry vendor financing. And of massaging the accounting books in order to increase shareholder value in the short term, also.
> motherlode of all heavy industry vendor financing
I doubt they are bigger than other national "heavy industry" champions from East Asia and Western/Central Europe. Without checking, I would guess that the global leaders are Boeing and Airbus.
The risks are different, but there's no getting around that the value of any investment is based on future cash flows, and that's speculating about the future.
To the extent that Google and Anthropic are competing for AI business, Google is somewhat hedged against Anthropic winning market share. They still get data center revenue and they own equity, so that’s a consolation prize.
On the other hand, it’s increasing Google’s investment in AI, in general.
GE Capital was a different creature, riding the line of fraud in some ways. They misapplied accounting rules and had to take write-downs and charges of over $20B for long-term care insurance.
That's what brought them down, but that could bring down anyone. My point is that vendor financing turns non-finance companies into finance companies, and brings along a huge can of worms.
I don't know the full history of this story, but I honestly wonder if this type of scandal is still possible in the United States. After Enron and WorldCom, the US introduced Sarbanes-Oxley reporting regulations. Additionally, after the Global Financial Crisis of 2008/2009, there was a dramatic increase in regulation of banks (of all kinds) and insurance companies.
.. yet today we have Kalshi, Polymarket, et al.
Those are private gambling businesses akin to a casino, not publicly listed businesses subject to the aforementioned regulations.
GE Capital was not just vendor financing and its serious problems were not due to vendor financing. I don’t think it is a great example in any way.
$40 billion is about a quarter's worth of profits for Google. They make that much every three months; what's the risk?
Hat tip. Great point. To quote J Paul Getty: "If you owe the bank $100, that's your problem. If you owe the bank $100 million, that's the bank's problem." In this case, yes, the investment is large, but not bankrupting for Google if it goes wrong.
Reciprocal agreements aren't new, sometimes they're used to gain access to a market the other party already has established a foothold in for other industry segments. These companies operate in the same general industry: tech/internet so it could be complementary services they are each after.
So far both of these companies have shown they suck at support so we know that's not it. It could be that it might help Anthropic to leverage Gemini in their competition with OpenAI and Google will take compute commitments.
Anecdata: I'm finding a lot of my "type random question in URL/search bar" has decent top Gemini answers where I don't scroll to results unless I need to dive deeper.
I agree those results are handy, but I've had several occasions where they turned out to be completely wrong. A 95% correctness rate is not good enough.
Funny how Gemini generally takes into account all the words you type whereas Google search tends to ignore most words you type or otherwise direct you to results for thematically (or grammatically or semantically) similar words to what you searched but otherwise wholly irrelevant.
Google crippling search to bolster AI is a dangerous game. But without people going to competitors, what's the recourse?
They're already crippling their AI to perform what look like sponsored searches.
The plural of anecdote is not data, but this does not feel like a one-off thing: I was trying to find where I could have a reasonable holiday, and asked Gemini to list all the international airports in two named countries that had direct flights from my preferred departure airport. The response came back with a single proposed flight destination with "book here" prominently available.
Only once I told it that the search was NOT an impulse purchase intent and I really wanted to know the possible destinations - then did it actually come back with the list of airports that satisfied my search criteria.
Although if we are looking for the bright side, it did provide a valid and informative answer on the second try. I haven't had that kind of experience on SEO-infested Google search for quite a long time now.
In another context I might see it as vendor financing. However given that Google and Anthropic are competitors in this segment and given that Google has previously invested in them I'd rather see this as a sort of bartered stock purchase presumably for the purpose of hedging against failure. If Anthropic wins the race and it turns out to be winner takes all and you happen to own half of Anthropic then you still win half of the immediate spoils even though your internal team lost. If you view losing the race as an existential threat then having all your eggs in the one basket is a terrible proposition.
Sure, since Google is both a supplier and a competitor, it’s both vendor finance and hedging. Also, it increases their investment in AI, in general.
Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?
Are we stopping too early in this analysis, though?
Google versus OpenAI and Anthropic, sure, but Microsoft is deep into OpenAI. Google helping Anthropic is also putting MS into a corner (one that may even be shrinking? Copilot and OpenAI financing hurting their brand, rumours of deep displeasure at OpenAI's promises vs. returns).
Seen from afar, I see Google happy to provide TPUs for money (improving Google's strategic positioning), torpedoing confidence in LLMs with their search AI summaries, and using their bankroll to force larger competitors (MS in particular) to keep investments high regardless of performance, user revolts, and internal tensions with Sam Altman's sales approach. Plus, Anthropic is in 'the lead' right now product-wise, so grooming them as a potential purchase would also seem to be a strategic hedge in the long term.
MS is not so deep with OpenAI; it's not all black and white. They have signed several distribution deals where Claude drives Copilot [1], and since Anthropic and MS are better aligned in the enterprise market, it makes sense. It also makes sense for MS not to lose ground anywhere at this point and to play with the best. Actually, any cash-rich company that is not OpenAI or Anthropic wants to be close by when either of the two needs money. That's the ultimate win they can aspire to right now: get a financial slice of frontier models on one hand while not losing revenue on the other, given the existential ordeal AI represents for them.
1. https://www.microsoft.com/en-us/microsoft-365/blog/2026/03/0...
You make some good points, but this part feels like a wild overreach:
> torpedoing confidence in LLMs with their search AI summaries
That is some real tin foil hat thinking.
Straightforward observations of market impact aren't tin foil :)
Google didn't launch LLM products early despite being a tech leader, and has gotten piles of bad press for its misleading AI search summaries. They know how and why they suck. Google search is a highly popular, market-facing service packaging bad summaries as "AI". Meanwhile LLM search threatens to disrupt Google's primary cash cow (advertising around search).
Here on HN, on Reddit, and media writ large, a lot of the “AI” failure stories are not about ChatGPT hallucinations, it’s the shockingly wrong search summaries from Google, undermining consumer confidence and breaching trust.
ChatGPT and other LLM providers rarely show conflicting source material side by side with misleading text gen. The number one search provider who leads in some LLM tech does though, routinely, looking incompetent and generating negative “AI” sentiment through repeated failures at mass scale…
So the theory here is either that the best search org in the world filled with geniuses can’t tell they’re pooping on their own product and profitability and aren’t fixing it because they can’t/won’t… … or <tinfoil mode engaged>… Google already makes money and is happy with substandard product and market performance in the cases where it hurts the necessary hype critical to other businesses but not themselves (while also pre-positioning in case LLM search becomes essential).
Win/win/win strategy with a substandard product, versus Google not being aware of what their biggest product is doing.
Google's AI summaries are doing lotsa work to make AI summaries seem terrible. I ascribe profit motives to their actions. Ascribing incompetence seems naive and irreconcilable with their strategic corporate history.
> Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?
By the time it is a problem, it will be too late.
How can there be a "winner takes it all" situation with AI?
OpenAI led the game while they were best. Anthropic followed and got better. Now OpenAI is catching up again, and also Google with Gemini(?)... and the open-weight models are two years behind.
Any win here seems only temporary. Even if a new breakthrough to a strong AI happen somehow.
Look at the "winner takes all" situation in web search. Of course other search engines exist, but the scale of the Google search operation allows it to do things that are uneconomical for smaller players.
Recursive self-improvement is one argument. Otherwise winner takes all seems much less likely than a OpenAI/Anthropic duopoly. For the best models, obviously other providers will have plenty of uses, but even looking at the revenue right now it's pretty concentrated at the top.
So if I'm Google I'd want a decent chunk of at least one of them.
What is the argument for a duopoly when Kimi and Deepseek models are only months behind?
It’s a commodity in the making.
The argument is based on one of these companies hitting the singularity, making it impossible for any other company to catch up ever. I still think it's way more likely we'll see a typical S-curve where innovation starts to plateau. But even a small chance of it happening in the future is worth a lot of money today.
There's a massive gap in this singularity thinking. We ARE the singularity. It has been exponential all the way back to the Big Bang: first the stars, the solar system, life, consciousness, language, computers, the internet. Yes, it is speeding up, and that is exciting, because we are going to experience a lot in our lifetimes. We have a lot of exponential growth to go before progress becomes instant. There are physical limits, too. Power generation, for example. I can't believe what dumb shit people bet the world economy on.
That's certainly how it looks right now but where's the guarantee? What happens if it turns out that deep learning on its own can't achieve AGI but someone figures out a proprietary algorithm that can? That sort of thing. Metaphorically we're a bunch of tribesmen speculating about the future potential outcomes of the space race (ie the impacts, limits, and timeline of ASI).
Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?
If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models.
I guess you can sell it to the Department of War.
> What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?
It's awesome and world-dominating; you just don't sell access to that AI. Instead you directly, by yourself, dominate any field where better AI provides a competitive advantage, as soon as you can afford to invest the capital to otherwise operate in that field. You start with the fields where the lowest investment outside of your unmatchable AI provides the highest returns, and plow the growing proceeds into investing in successive fields.
Obviously, it is even more awesome if you are a gigantic company with enormous cash to throw around to start with when you develop the AI in question, since that lets you get the expanding domination operation going much quicker.
It's not clear to me that one horse-sized AI allows you to outcompete 100 duck-sized AIs in use by everyone else once you factor in the non-intelligence contributions that the others with weaker AIs bring to the table.
There's a lot more to building a successful product than how smart your engineers/agents are, how many engineers/agents you have, and capital.
Google, for example, can be extremely dysfunctional at launching new products despite unimaginably vast resources. They often lack intangible elements to success, such as empathizing with your customers' needs.
If we were in a world where AI was not already widespread, then I would agree that having strong AI would be an immense competitive advantage. However, in a world where "good enough" AI is increasingly widespread, the competitive advantage of strong AI diminishes as time goes on.
Yup. That doesn't really take a full-blown AGI on the path to ASI on the path to godhood - it'll take a bit better and more reliable LLM with a decent harness.
That's why I've been saying that the entire software industry is now living on borrowed time. It'll continue at the mercy of SOTA LLM operators, for as long as they prefer to extract rent from everyone for access to "cognition as a service". In the meantime, as the models (and harnesses) get better, the number of fields SOTA model owners could dominate overnight, continues to grow.
(One possible trigger would be the open models. As long as the gap between SOTA and open is constant or decreasing, there will be a point where SOTA operators might be forced to cannibalize the software industry by a third party with an open model and access to infra pulling the trigger first.)