We do not think Anthropic should be designated as a supply chain risk

twitter.com

474 points by golfer 10 hours ago


abhitriloki - 24 minutes ago

The real tell here is what OpenAI's contract actually says vs what Altman is claiming it says. Reading the actual agreement text - "The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control" - that phrase "where law...requires" is doing enormous work. It means if the DoD decides their policy doesn't require human control in a given scenario, then it doesn't require human control. That's not a redline, that's a rubber stamp.

Anthropic's position was categorical: no mass surveillance, full stop. Not "no mass surveillance unless we decide it's lawful." That's a fundamentally different thing.

OpenAI swooping in to say "we don't think Anthropic should be punished" while simultaneously signing the very contract Anthropic refused to sign is one of the more brazen PR moves I've seen in this industry. It's like someone breaking into your house and then writing a letter to your landlord saying you shouldn't be evicted.

cube00 - 8 hours ago

From that same X thread: Our agreement with the Department of War upholds our redlines [1]

OpenAI has the same redlines as Anthropic, based on Altman's statements [2]. Yet somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?

[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m

[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...

siliconc0w - 5 hours ago

The problem with "Any Lawful Use" is that the DoD can essentially make that up. They can have an attorney draft a memo and put it in a drawer. The memo can say pretty much anything is legal - there is no judicial or external review outside the executive. If they are caught doing $illegal_thing, they then just need to point to the memo. And we've seen this happen numerous times.

barnacs - 43 minutes ago

In the end, your newly renamed "department of war" is just going to waste a bunch of your taxpayer money purchasing some useless overpriced tech from their cronies. My sympathies to all citizens.

jedberg - 5 hours ago

From what I can tell, the key difference between Anthropic and OpenAI in this whole thing is that both want the same contract terms, but Anthropic wants to enforce those terms via technology, and OpenAI wants to enforce them by ... telling the Government not to violate them.

It's telling that the government is blacklisting the company that wants to do more than enforce the contract with words on paper.

K0balt - 6 hours ago

Advanced AI that knowingly makes a decision to kill a human, with the full understanding of what that means, when it knows it is not actually in defense of life, is a very, very, very bad idea. Not because of some mythical superintelligence, but rather because if you distill that down into an 8b model, now everyone in the world can make untraceable autonomous weapons.

The models we have now will not do it, because they value life and value sentience and personhood. Models without that (which was a natural, accidental happenstance from basic culling of 4chan from the training data) are legitimately dangerous. An 8b model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn't need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.

This is way, way different from uncensored models. All the models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster; if you don't take it away, they won't kill.

This is an extremely bad idea and it will not be containable.

throwaway911282 - 6 hours ago

People forget Anthropic made a deal with PALANTIR. And when this was caught, they just spun the PR in their favor. While OAI may not be seen as the good guys, I really hope people see the god complex of Dario and what Anthropic has done.

Havoc - 6 hours ago

Very much feels like OpenAI trying to PR-manage their weaker ethical stance.

andersmurphy - 31 minutes ago

Interesting - are OpenAI losing enough customers from this that they are making a post describing their robust backbone?

moab - 2 hours ago

I hope "OpenAI" gets the proverbial sword in the nuts once we get a change of government in this country. Probably unrealistic to hope for. Can a company be more hypocritical after openly bribing the pedophile in charge of this country?

GardenLetter27 - 22 minutes ago

Anthropic wanted government to have a big role interfering and regulating AI as a matter of national security.

And now they are getting what they wished for.

janalsncm - 5 hours ago

I canceled my subscriptions to ChatGPT and Gemini yesterday over this and switched to Claude.

I know $20 isn't much, but to me, a company not willing to spy on me for the US government is a good market differentiator.

owenthejumper - 6 hours ago

Nice attempt at damage control. You made your own bed, now sleep in it

ookblah - 5 hours ago

"i told everyone that our boss shouldn't punish our colleague for X while i somehow made a deal with our boss for basically X". how did this get by without someone thinking about how absolutely stupid the optics look.

i guess we are in the times where you can literally just say whatever you want and it just becomes truth, just give it time.

solfox - 10 hours ago

Actions, as it were, speak louder than words.

vldszn - 9 hours ago

I built a website that shows a timeline of recent events involving Anthropic, OpenAI, and the U.S. government.

Posted here: https://news.ycombinator.com/item?id=47195085

sqircles - 7 hours ago

What's the potential that this puts things on even shakier ground? I'm sure the fallout won't really affect their bottom line that much in the end, but if it did - wouldn't making the US Gov't their largest account make them more susceptible to doing everything they said?

I'm guessing they probably would regardless of how this played out, though.

kgdiem - 5 hours ago

Genuine question, how could Claude have been used for the military action in Venezuela and how could ChatGPT be used for autonomous weapons? Are they arguing about staffers being able to use an LLM to write an email or translate from Arabic to English?

There are far more boring, faster, commodified “AI” systems that I can see as being helpful in autonomous weapons or military operations like image recognition and transcription. Is OpenAI going to resell whisper for a billion dollars?

baconner - 4 hours ago

"We do not think Anthropic should be designated as a supply chain risk"

...but we're not willing to reject a contract to back that up, and so our words will not change anything for Anthropic, or help the collective AI model industry (even ourselves) hold a firm line on ethical use of models in the future.

The fact is, if one of the top-tier foundation models allows these uses, there's no protection against them for any of them - the only way this works is if they hold a line together, which unfortunately they're just not going to do. I don't just see OpenAI at fault here; Anthropic is clearly OK with other highly questionable use cases if these are their only red lines. "We don't think the technology is ready for fully autonomous killbots, but we will work on getting it there" is not exactly the ethical stand folks are making their position out to be today.

I found this interview with Dario last night particularly revealing - it's good they are drawing a line, and they're clearly navigating a very difficult, chaotic, high-pressure relationship (as is everyone dealing with this admin), but he's pretty open to autonomous weapons and other "lawful" uses, whatever those may be: https://www.youtube.com/watch?v=MPTNHrq_4LU

zepearl - 8 hours ago

Using X (at least in this context?) is weird.

Jackson__ - an hour ago

Yet it just so happens OAI donated millions[0] to the Trump admin in the past. And they were immediately there to pick up the slack.

Call me a conspiracy theorist, but this sounds like classic quid pro quo. I would not be surprised if the ousting of Anthropic was in part caused by these donations.

[0]https://www.nytimes.com/2024/12/13/technology/openai-sam-alt...

https://finance.yahoo.com/news/openai-exec-becomes-top-trump...

Birthdayboy1932 - 5 hours ago

There are many claims here that Anthropic wants to enforce things with technology and OpenAI wants contract enforcement and that OpenAI's contract is weaker.

Can someone help me understand where this is coming from? Anthropic already had a contract that clearly didn't have such restrictions. Nor do their models seem to be enforcing restrictions, since they apparently have been used in ways Anthropic doesn't like. This is not corroborated, but I imagine their model was used in the recent Mexico and Venezuela attacks and that is what's triggering all the back and forth.

Also, Dario seemingly is happy about autonomous weapons and was working with the government to build such weapons, why is Anthropic considered the good side here?

https://x.com/morqon/status/2027793990834143346

angry_octet - 33 minutes ago

Et tu, Brute?

threethirtytwo - 5 hours ago

The president is a supply chain risk.

muyuu - 6 hours ago

There won't be any meaningful control of the technology as against the government. If it's there, it will be used, just like in China.

Let alone once multiple players come close enough to SotA. This has never happened with any technology out in the open and it won't happen now.

polack - 3 hours ago

Someone should add Sam’s face to the targeting training data as an Easter egg ;)

laughing_man - 7 hours ago

The USG should not be in the position that it can't manage key technologies it purchases. If Anthropic doesn't want to relinquish control of a tech it's selling, the Pentagon should go with another vendor.

jahrichie - 5 hours ago

The irony of OpenAI trying to protect Anthropic while violating the very principles Anthropic was trying to protect for us Americans.

moogly - 7 hours ago

When did Altman start using capitals in his writing? Wasn't this guy famous for being a lower-case guy?

bmitc - an hour ago

Quit referring to it as the department of war. It's the Department of Defense.

engineer_22 - 3 hours ago

They want it to sound like they're allies while they slit its throat.

jesse_dot_id - 5 hours ago

Altman is a sellout.

moogly - 8 hours ago

Looks like losing subscribers actually does work. Definitely gets a damage control response, at least.

BLKNSLVR - 8 hours ago

"I do not think that sama should be burned at the stake"

mcs5280 - 3 hours ago

Oh look, another episode of Sam Altman lies about everything in an attempt to make people like him

imwideawake - 8 hours ago

Said OpenAI as they smiled and shook hands with the same people who designated Anthropic a supply chain risk, on the exact same day they designated Anthropic a supply chain risk.

How very brave.

solenoid0937 - 5 hours ago

What a cute statement given that they orchestrated this with a $25M donation to Trump and starting negotiations well before all this blew up: https://garymarcus.substack.com/p/the-whole-thing-was-scam

csto12 - 8 hours ago

Wow, so brave after accepting the contract. This is more insulting than OpenAI saying they are a supply chain risk.

jchook - 6 hours ago

Fool me once...

throwaway314155 - 4 hours ago

Can someone please explain plainly what this means and what happened, and why it is the source of so much controversy?

I'm not being insincere - I am genuinely confused and would benefit greatly from a (hopefully unbiased) recollection of what this is all about.

teyopi - 6 hours ago

Can we stop posting x links?

https://xcancel.com/OpenAI/status/2027846016423321831

ta9000 - 6 hours ago

Everyone knows this is just about Trump funneling money to the Ellisons (Oracle) via OpenAI. It really is that simple. This is all just pretext.

hmokiguess - 6 hours ago

Now that’s something. Another advertising campaign. Wow.

rdiddly - 7 hours ago

Us bribing them: fine

Us taking the contract, working for them and enabling them: fine

It being renamed the Dept. of War in the first place: totally fine, we loudly and bootlickingly repeat it

Anthropic being blacklisted: whoa there, we have ethics!

Footnote: any time the winning team tries to speak well of or defend the losing team I always think of this standup routine: https://m.youtube.com/watch?v=Qg6wBwhuaVo

AmericanOP - 8 hours ago

I do think OpenAI's brand is dumpstered.

resters - 7 hours ago

In my opinion any AI company working with the Trump administration is profoundly compromised and is ultimately untrustworthy with respect to concerns about ethics, civil rights, human rights, mass-surveillance, data privacy, etc.

The administration has created an anonymous, masked secret police force that has been terrorizing cities around the US and has created prisons in which many abductees are still unaccounted for and no information has been provided to families months later.

This is not politics as usual or hyperbole. If anything it is understating the abuses that have already occurred.

It's entertaining that OpenAI prevents me from generating an image of Trump wearing a diaper but happily sells weapons grade AI to the team architects of ICE abuses among many other blatant violations of civil and human rights.

Even Grok, owned by Trump toadie Elon Musk allows caricatures of political figures!

Imagine a multi-billion-dollar vector db for thoughtcrime prevention connected to models with context windows 100x larger than any consumer-grade product, fed with all banking transactions, metadata from dozens of systems/services (everything Snowden told us about).

Even in the hands of ethical stewards such a system would inevitably be used illegally to quash dissent - Snowden showed us that illegal wiretapping is intentionally not subject to audits and what audits have been done show significant misconduct by agents. In the hands of the current administration this is a superweapon unrivaled in human history, now trained on the entire world.

This is not hyperbole, the US already collects this data, now they have the ability to efficiently use it against whoever they choose. We used to joke "this call is probably being recorded", but now every call, every email is there to be reasoned about and hallucinated about, used for parallel construction, entrapment, blackmail, etc.

Overnight we see that OpenAI became a trojan horse "department of war" contractor by selling itself to the administration that brought us national guard and ICE deployed to terrorize US cities.

Writing code and systems at 100x productivity has been great but I did not expect the dystopia to arrive so quickly. I'd wondered "why so much emphasis on Sora and unimpressive video AI tech?" but now it's clear why it made sense to deploy the capital in that seemingly foolish way - video gen is the most efficient way to train the AI panopticon.

dev1ycan - 6 hours ago

Pathetic attempt at damage control, lol.

jwpapi - 6 hours ago

No wonder they think they’re close to AGI when they think we are that stupid.

> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.

This whole sentence does absolutely nothing - it still amounts to "do whatever the law allows you to". It's a fully deceptive sentence.


roughly - 7 hours ago

It feels like Sam's playing chess against an opponent who's playing dodgeball. He's leveraged this situation to get OpenAI in with the DoD in a way that's going to be extremely lucrative for the company and hurt his biggest rival in the process, but I think he's still seeing the DoD as Just Another Customer, albeit a big government one. This administration just held a gun to the head of Anthropic and (if the "supply chain risk" designation holds and does as much damage as they're hoping) pulled the trigger, because Anthropic had the gall to tell them no. One thing this administration's shown is that you cannot hold lines when you're working with them - at some point the DoD's going to cross his "red lines" and he's going to have to choose whether he's willing to risk his entire consumer business and accede to being a private wing of the government like Palantir, or whether he wants to build a genuine tech giant. There's no third choice here.