Anthropic irks White House with limits on models’ use

semafor.com

214 points by mindingnever 7 hours ago


impossiblefork - 6 hours ago

Very strange writing from semafor.com

>For instance, an agency could pay for a subscription or negotiate a pay-per-use contract with an AI provider, only to find out that it is prohibited from using the AI model in certain ways, limiting its value.

This is, of course, quite false. They know the restrictions when they sign the contract.

owenthejumper - 6 hours ago

This feels like a hit piece by Semafor. A lot of the information in there is simply false. For example, Microsoft's AI Agreement prohibits:

"...cannot use...For ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal data, including biometric data, without the individual’s valid consent."

LeoPanthera - 6 hours ago

One of the very few tech companies who have refused to bend the knee to the United States' current dictatorial government.

saulpw - 6 hours ago

Gosh, I guess the SaaS distribution model might give companies undesirable control over how their software can be used.

Viva local-first software!

Terretta - 5 hours ago

Here's an entertaining example from 20 years ago:

By using the Apple Software, you represent and warrant that you ... also agree that you will not use these products for any purposes prohibited by United States law, including, without limitation, the development, design, manufacture or production of missiles, or nuclear, chemical or biological weapons. -- iTunes

No production of missiles with iTunes? Curses, foiled again.

TheServitor - 3 hours ago

"Eventually, though, its politics could end up hurting its government business."

Good? What if, and I know how crazy this sounds, not using AI to surveil people were a more desirable goal than the success of yet another tech company at locking in government pork and subsidies?

j2kun - 5 hours ago

The US government can train their own damn LLM if they want an unrestricted one so bad.

tracker1 - 3 hours ago

What is the govt expecting to do by combining surveillance with Anthropic models? I'm not convinced this is any kind of valid job function.

stevage - 3 hours ago

This isn't a principled stand, it's just a negotiating tactic. They'll allow it when the price is right.

SilverbeardUnix - 6 hours ago

Honestly makes me think better of Anthropic. Let's see how long they stick to their guns. I believe they will fold sooner rather than later.

stephenlf - an hour ago

I ♥ Anthropic

sfink - 5 hours ago

First, contracts often come with usage restrictions.

Second, this article is incredibly dismissive and whiny about anyone ever taking safety seriously, for pretty much any definition of "safety". I mean, it even points out that Anthropic has "the only top-tier models cleared for top secret security situations", which seems like a direct result of them actually giving a shit about safety in the first place.

And the whining about "the contract says we can't use it for surveillance, but we want to use it for good surveillance, so it doesn't count. Their definition of surveillance is politically motivated and bad"! It's just... wtf? Is it surveillance or not?

This isn't a partisan thing. It's barely a political thing. It's more like "But we want to put a Burger King logo on the syringe we use for lethal injections! Why are you upset? We're the state so it's totally legal to be killing people this way, so you have to let us use your stuff however we want."

SanjayMehta - 2 hours ago

So a private company sanctioned the US government? And now the US government is upset?

I do love the smell of hypocrisy early in the morning.

FrustratedMonky - 5 hours ago

Wasn't a big part of AI 2027 that government employees became overly reliant on AI and couldn't function without it? So I guess we are still on track to hit that timeline.

gowld - 4 hours ago

> The policy doesn’t specifically define what it means by “domestic surveillance” in a law enforcement context and appears to be using the term broadly, creating room for interpretation.

> Other AI model providers also list restrictions on surveillance, but offer more specific examples and often have carveouts for law enforcement activities. OpenAI’s policy, for instance, prohibits “unauthorized monitoring of individuals,” implying consent for legal monitoring by law enforcement.

This is unintentionally (for the author) hilarious. It's a blatant misinterpretation of the language, while complimenting the clarity of the language. Who "authorizes" "monitoring of individuals"? If an executive agency monitors an individual in violation of a court order, is that "authorized"?

chatmasta - 6 hours ago

Are government agencies sending prompts to model inference APIs on remote servers? Or are they running the models in their own environment?

It’s worrying to me that Anthropic, a foreign corporation (EDIT: they’re a US corp), would even have the visibility necessary to enforce usage restrictions on US government customers. Or are they baking the restrictions into the model weights?

g42gregory - 5 hours ago

No judgement here, but a US-based corporation refusing services to the US Government?

While the terms of service are what they are, the US Government can withdraw its military contracts from Anthropic (or refuse future contracts if they don't have any so far). Or softly suggest to its own contractors that they limit their business dealings with Anthropic. Then Anthropic will have a hard time securing compute from NVIDIA, AWS, Google, MSFT, Oracle, etc.

This won't last.