Anthropic: Developing a Claude Code competitor using Claude Code is banned

twitter.com

317 points by behnamoh 2 days ago


ctoth - 2 days ago

I was all set to be pissed off, "you can't tell me what I can make with your product once you've sold it to me!" but no... This outrage bait hinges on the definition of "use"

You can use Claude Code to write code to make a competitor for Claude Code. What you cannot do is reverse engineer the way the Claude Code harness uses the API to build your own version that taps into stuff like the max plan. Which? makes sense?

From the thread:

> A good rule of thumb is, are you launching a Claude code oauth screen and capturing the token. That is against terms of service.

nnutter - 2 days ago

Anthropic showed their true colors with their sloppy switch to using Claude Code for training data. They can absolutely do what they want but they have completely destroyed any reason for me to consider them fundamentally better than their competitors.

jimmydoe - 2 days ago

I still don't clearly understand what Zed, OpenCode, and other third-party clients can and can't do with the Max plan. Developers want to use these clients and pay you $200 a month, so why are you pissing us off? I understand some abusers exist, but you will never be able to ban them 100%, technically.

Very poor communication, despite somewhat reasonable intentions, could be the beginning of the end for Claude Code.

d4rkp4ttern - 2 days ago

This message from the Zed Discord (from a Zed staffer) puts it clearly, I think:

“….you can use Claude code in Zed but you can’t hijack the rate limits to do other ai stuff in zed.”

This was a response to my asking whether we can use the Claude Max subscription for the awesome inline assistant (Ctrl+Enter in the editor buffer) without having to pay for yet another metered API.

The answer is no; the above was a response to a follow-up.

An aside - everyone is abuzz about “Chat to Code” which is a great interface when you are leaning toward never or only occasionally looking at the generated code. But for writing prose? It’s safe to say most people definitely want to be looking at what’s written, and in this case “chat” is not the best interaction. Something like the inline assistant where you are immersed in the writing is far better.

falloutx - 2 days ago

OpenCode is much better anyway, and it doesn't change its workflow every couple of weeks.

Lars147 - 2 days ago

Xcancel Link: https://xcancel.com/SIGKITTEN/status/2009697031422652461

otikik - 2 days ago

Behold the "Democratization of software development".

pton_xd - 2 days ago

As long as I have a Claude subscription, why do they care what harness I use to access their very profitable token inference business?

Centigonal - 21 hours ago

It seems like Anthropic is taking the Apple approach to these apps. Apple made it hard to mod their hardware, or run other OSs on the hardware, or run their OS on other hardware, or use other app stores on their phones. Basically, they want to make it so that you buy into the Apple stack with one or two all-or-nothing decisions, with very little room for mixing and matching.

Not a very hacker-friendly strategy, but Apple's market cap is pretty big. I think it comes down to whether Anthropic can make a product with enough of a lead over competitors to offset the restrictions.

Imustaskforhelp - 2 days ago

This is a highly monopolistic action, in my opinion, from Anthropic, which actively feels the most hostile towards developers.

This really shouldn't be the direction Anthropic goes in. It is such a negative direction, and they could've instead tried to cooperate and communicate with the large open-source agents, but they decided to do this, which the developer community has met with criticism, and rightfully so.

narmiouh - 2 days ago

I wonder how this will affect future Anthropic products, if prior art/products exist that have already been built using Claude.

If this is only meant to limit knowledge distillation for training new models, people copying Claude Code specifically, or Max plan credentials being used as an API replacement, they could carve out proper exceptions rather than being overly broad, which risks turning away new customers for fear of (future) conflict.

VoxPelli - 2 days ago

Sounds like standard terms from lawyers – not very friendly to customers, very friendly to the company – but is it particularly bad here?

I remember when I was part of procuring an analytics tool for a previous employer and they had a similar clause that would essentially have banned us from building any in-house analytics while we were bound by that contract.

We didn't sign.

zingar - 2 days ago

Is this a standard tech ToU item?

Is this them saying that their human developers don’t add much to their product beyond what the AI does for them?

pjmlp - a day ago

I still remember the backlash Borland got when they had the clever idea of forbidding writing compilers with Borland C++, which they naturally had to roll back.

Some people never learn from history, it seems.

bionhoward - 2 days ago

Yup, I’ve been crowing about these customer noncompetes for years now and it’s clear Anthropic has one of the worst ones. The real kicker is, since Claude Code can do anything, you’re technically not allowed to use it for anything, and everyone just depends on Anthropic not being evil

throwaw12 - 2 days ago

Doesn't this make using the Claude Agent SDK dangerous?

Suppose I wrote a custom agent which performs tasks for a niche industry. Wouldn't that be considered "building a competing service", because their Service performs agentic tasks via Claude Code?

sharat87 - a day ago

I'm taking this more as a "pricing" change. Like, if you pay $200, then you can use inference only in these limited scopes. If you want more unrestricted access to inference, use the API token pricing.

Which seems fine? They could've just not offered the $200 plan and perhaps nobody would've complained. They tried it, noticed it was unsustainable, so they're trying to remodel it so it _is_ sustainable.

I think the upset is misplaced. :shrug:

with - 2 days ago

I think there are issues with Anthropic (and their ToS); however, banning the "harnesses" is justified. If you're relying on scraping a web UI or reverse-engineering private APIs to bypass per-token costs, it's just VC subsidy arbitrage. The consumer plan has a different purpose.

The ToS is concerning, and I have concerns with Anthropic in general, but this policy enforcement is not problematic to me.

(yes, I know, Anthropic's entire business is technically built on scraping. but ideally, the open web only)

mcintyre1994 - 2 days ago

https://xcancel.com/SIGKITTEN/status/2009697031422652461

This tweet reads as nonsense to me

It's quoting:

> This is why the supported way to use Claude in your own tools is via the API. We genuinely want people building on Claude, including other coding agents and harnesses, and we know developers have broad preferences for different tool ergonomics. If you're a maintainer of a third-party tool and want to chat about integration paths, my DMs are open.

And the linked tweet says that such integration is against their terms.

The highlighted term says that you can't use their services to develop a competing product/service. I don't read that as the same as integrating their API into a competing product/service. It does seem to suggest you can't develop a competitor to Claude Code using Claude Code, as the title says, which is a bit silly, but doesn't contradict the linked tweet.

I suspect they have this rule to stop people using Claude to train other models, or competitors testing outputs etc, but it is silly in the context of Claude Code.

PeterStuer - 20 hours ago

If I remember correctly, back in the day the EULA on Visual Studio components also disallowed development of competitors to Microsoft (Office) products.

throw1235435 - 2 days ago

Software devs training the model with their code, making themselves obsolete, is encouraged, not banned.

Claude Code making itself obsolete is banned.

lobito25 - a day ago

Claude Code's quality degradation is currently the worst I've ever seen.

dev_l1x_be - 2 days ago

This whole situation is getting out of hand. With the development speed AI has, it is only a matter of time before a competitor does 80% of what CC does, and that is going to be good enough for most of us. Anthropic trying to Windows its way into this category is not the smartest move.

afinlayson - a day ago

If you are old enough, this feels a little like the BitKeeper/Git situation.

mmaunder - 2 days ago

Imagine a world where Google had its product shit together, didn't publish the AIAYN paper, and had a monopoly on LLMs, with the models a black box to all outsiders. It's terrifying. Thankfully we have extreme competition in the space to mitigate anything like this. Let's hope it stays that way.

zkmon - 2 days ago

We sell you our hammer, but you are prohibited from using it to make your own hammer?

Footprint0521 - a day ago

Lol, I used Codex to reverse engineer itself to farm the OAuth and even made it OpenAI API compatible.

llmslave3 - 2 days ago

I find it slightly ironic that Anthropic benefits from ignoring intellectual property but then tries to enforce it on their competitors.

How would they even detect that you used CC on a competitor? There's surely no ethical reason not to do it, and it seems unenforceable.

akomtu - 2 days ago

AI is built by violating all rules and moral codes. Now they want rules and moral code to protect them.

mmaunder - 2 days ago

One day all programs will belong to the AI that made them, which was trained in a time before we forgot how to program.

Yizahi - 2 days ago

The corporate hypocrisy is reaching previously unseen levels. Ultra-wealthy thieves who got rich by stealing a dragon's hoard worth of property are now crying foul about people following the same "ideals". What absolute snowflakes. The LLM sector is the only one where I'm rooting for Chinese corporations to trounce the incumbents, thus demonstrating the FAFO principle in practice.

oxag3n - 2 days ago

What happens if there's a pull request and it was generated using Claude Code?

Can they sue maintainers?

bastawhiz - 2 days ago

I think this is kind of a nothingburger. This reads like a standard clause in any services contract. I also cannot (without a license):

1. Pay for a stock photo library and train an image model with it that I then sell.

2. Use a spam detection service, train a model on its output, then sell that model as a competitor.

3. Hire a voice actor to read some copy, train a text to speech model on their voice, then sell that model.

This doesn't mean you can't tell Claude "hey, build me a Claude Code competitor". I don't even think they care about the CLI. It means I can't ask Claude to build things, then train a new LLM based on what Claude built. Claude can't be your training data.

There's an argument to be made that Anthropic didn't obtain their training material in an ethical way so why should you respect their intellectual property? The difference, in my opinion, is that Anthropic didn't agree to a terms of use on their training data. I don't think that makes it right, necessarily, but there's a big difference between "I bought a book, scanned it, learned its facts, then shredded the book" and "I agreed to your ToS then violated it by paying for output that I then used to clone the exact behavior of the service."

pnathan - 2 days ago

I am _much_ more interested in i. building cool software for other things and ii. understanding the underlying models than in building a "better Claude Code".

- 18 hours ago
[deleted]
ChrisArchitect - 2 days ago

Related:

Anthropic blocks third-party use of Claude Code subscriptions

https://news.ycombinator.com/item?id=46549823

newaccount1000 - 2 days ago

Does OpenAI have the same restriction?

slowmovintarget - 2 days ago

"You are not allowed to use words found in our book to write your own book if you read our book."

Anthropic has just entered the "for laying down and avoiding" category.

ronbenton - 2 days ago

This falls under "lmao even" right? Like, come on, the entire business model of most generative AI plays right now hinges on IP theft.

orochimaaru - 2 days ago

Is this targeted at Cursor?

FpUser - 2 days ago

First they've raided all the content (I do not consider this bad), now they want to set terms? Well go fuck yourselves.

insin - 2 days ago

Claude Code, make a Claude Code competitor. Make no mistakes.

miohtama - 2 days ago

"We stole the whole Internet, but do not dare to steal us”

shmerl - 2 days ago

Lol. Next will be, "Replacing our CEOs with AI is banned".

starkeeper - 2 days ago

[flagged]

wilg - 2 days ago

Can anyone read? The text doesn't mention Claude Code or anything like it at all.

I swear to god everyone is spoiling for a fight because they're bored. All these AI companies have this language to try to prevent people from "distilling" their model into other models. They probably wrote this before even making Claude Code.

Worst-case scenario, they cancel your account if they really want to, but almost certainly they'll just tweak the language once people point it out.