Anthropic announces proof of distillation at scale by MiniMax, DeepSeek, Moonshot

twitter.com

111 points by Jimmc414 6 hours ago


logicprog - 5 hours ago

Just reading the headline, I say good.

A) These models are trained by ignoring IP. It is hypocritical and absurd to then try to assert IP over them. And I am for the destruction of IP on all ends.

B) What this essentially means is that the Chinese labs are taking the work of these megacorporations and making it freely accessible for other labs and businesses to serve inference on, fine-tune, and host privately on-prem. That's clearly a good thing for competition in the market as a whole.

C) I don't see why we should have to duplicate the massive energy and infrastructure investment of building foundation models over and over, forever, just to preserve the IP rights of a few companies. That seems a shame; it seems better to me for everyone to learn from everyone else, so the whole ecosystem improves by topping and building off each other. That's also why publishing research on the architecture and training of these models is so much better than what the proprietary labs do (keeping everything secret), although tbf Anthropic's interpretability research is cool.

D) These Chinese models give 90% of the performance of frontier proprietary models at a tenth or twentieth of the cost. That seems like a win for everyone. Not to mention that this distilling also allows them to make much smaller local models that everyone can run. This is a win for actual democratization, decentralization, and accessibility for the little guy.

impulser_ - 6 hours ago

Why would anyone care about this at all?

MiniMax, DeepSeek, and Moonshot are all releasing models for the public to use for free.

Anthropic, OpenAI, Google, etc. have been scraping information to train their models that they had no right to scrape, yet when these companies pay them to scrape data we are supposed to be worried?

Labs like Anthropic always preach that they're trying to build AI for everyone while releasing expensive models that are closed source.

The only reason AI is affordable at all is because of these Chinese AI labs.

paxys - 5 hours ago

It's crazy for their official account to post this when Anthropic itself is fighting multiple high-profile lawsuits over its unauthorized use of proprietary content to train its models. Did no one run this by legal?

cs702 - 6 hours ago

It's been known for a long while that model outputs can be used as training data for another model to copy the original model's behavior, a technique known as distillation.

What I didn't know is that the three groups mentioned "created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models." There's some irony in that, given that Anthropic and all other established AI shops have been criticized for using copyrighted materials without permission to train their own models. I wouldn't be shocked if we subsequently find out that every major AI shop has secretly engaged in distillation at some point in the past.

Still, wow, 24,000 accounts. I can't help but wonder, how many other AI shops have surreptitious accounts with other AI shops right now?

falcor84 - 6 hours ago

Interesting, and my main takeaway is that ~16 million sessions is enough to distill Claude. That's extremely doable - obviously, as it's been done repeatedly - but it just looks very feasible in general.

If I think of the number of lessons and educational conversations a human would need to acquire their lifetime knowledge, I would hazard that AI-to-AI learning no longer requires many orders of magnitude more than that.

MiSeRyDeee - 5 hours ago

Kudos to them then, for doing such a good job at distillation. Only 16 million chats (shared by multiple labs/models) needed to get mostly on-par performance at 1/10th to 1/50th of the cost. Keep it up!

Alifatisk - an hour ago

Reading the comment section in this thread gave me a good laugh, no one is buying into this.

If anything, it's thanks to these Chinese labs that I'm able to have something like GLM-5 for $7 a quarter or Kimi K2.5 for $2 a month, while getting results close to Claude. I am grateful. Looking forward to the new DeepSeek model.

But one thing that makes me curious is how, let's say, DeepSeek is doing this. Are they paying cheap workers to buy subscriptions and chat to gather data? Have they purchased lots of API keys and used automated scripts to feed Claude prompts and collect the output? How are they doing this?

throwfaraway4 - 6 hours ago

Company that rips off creators to build their product complains other companies are doing the same to them.

iagorodriguez - 5 hours ago

I was not emotionally prepared for this level of humor today. It's Monday, please!

aquir - 5 hours ago

Not nice, but the frontier labs "distilled the whole internet" using Common Crawl.

xanthor - 5 hours ago

Ironic phrasing here. China is the only country that actually has the capacity to deeply integrate AI into industrial manufacturing in a way that will reduce the cost of goods. They already have lights-out autonomous factories without AI.

armcat - 5 hours ago

This is such an insane rabbit hole. AI labs distill weights from the entirety of the internet's knowledge, (mostly) without anyone's consent, which (technically) amounts to theft. But the Chinchilla scaling law dictates that you must expend X amount of energy to make this knowledge useful. Then data quality dictates that you shift the weights to a more useful latent space by paying maths, coding, and domain experts lots of money. So you have "stolen" the data, but then paid billions to make it useful. And useful it is!
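The Chinchilla point can be made concrete with a back-of-envelope sketch. The ~20 tokens-per-parameter rule of thumb comes from the Chinchilla paper (Hoffmann et al., 2022); the 70B parameter count is just an illustrative assumption:

```python
# Chinchilla rule of thumb: compute-optimal training uses roughly
# 20 training tokens per model parameter (Hoffmann et al., 2022).
params = 70e9          # assumed model size: 70B parameters (illustrative)
tokens_per_param = 20  # approximate Chinchilla-optimal ratio

optimal_tokens = params * tokens_per_param
print(f"~{optimal_tokens / 1e12:.1f}T training tokens")
```

So even a mid-sized model "needs" on the order of a trillion curated tokens, which is where the billions in cleaning and expert-annotation spend go.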

Then another lab comes and "steals" from you - that beautiful, refined data-oil - by distilling your weights using inferior equipment but with a toolbox of ingenuity and low-level hacking tricks. They reach 90% of your performance at a 20x cost reduction.

What happens when another lab distills from the distilled lab?

Who is the thief? How far will Alice go?

Zufriedenheit - 4 hours ago

The companies scraping the whole internet without caring for anyone’s tos and illegally torrenting every single book ever written are now complaining about the output of their models being used for training. That is very ironic.

m_ke - 5 hours ago

We should probe Anthropic for what accounts they made to access third-party data, or which proxies they use to circumvent scraping blockers.

oncallthrow - 5 hours ago

Live by the sword, die by the sword

mudkipdev - 5 hours ago

Don't throw stones from glass houses. Ask Anthropic about the proxies they use for scraping. They're well-versed on the topic

iamsaitam - 5 hours ago

At least they paid you for it... unlike you

osiris970 - 5 hours ago

It's not illegal, just against their TOS. Your job to deal with that, Anthropic lol

lousken - 5 hours ago

Good, if you don't release open weights, someone else does.

ddxv - 43 minutes ago

Anthropic has never released an open weight model.

snowhale - 3 hours ago

the 16M session number is the real data point here. that's not a huge moat by any standard -- it just means detecting distillation is structurally hard, not that it isn't happening. you'd need to either detect statistical similarity in outputs (feasible but expensive) or rely on behavior probes, which get gamed fast. this announcement reads more like a legal paper trail than a technical deterrent.

kgeist - 5 hours ago

Were those 16 million sessions used only for alignment, chat format, reasoning, etc.? Or is it possible to train a base model too? If a single session is at least 32k tokens, then that's already 0.5 trillion tokens to train on. Interesting.
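The arithmetic checks out. A quick sketch (the 32k tokens-per-session average is the commenter's assumption, not a figure from the announcement):

```python
# Rough token yield from the reported distillation sessions.
sessions = 16_000_000        # figure from the announcement
tokens_per_session = 32_000  # assumed average session length

total_tokens = sessions * tokens_per_session
print(f"{total_tokens / 1e12:.3f} trillion tokens")  # roughly half a trillion
```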

veselin - 5 hours ago

I think they're signaling two things:

* Likely they will seek regulation that would ban some models. Not sure this can work, but they will certainly try.

* Likely they will not release some of their next models in the API.

gregman1 - 5 hours ago

Do we need to re-announce proof of dirty practices by Anthropic?

karmasimida - 5 hours ago

Unless they stop selling APIs to the public this can’t be stopped.

Mind you, nuclear weapons can be regulated not because the tech itself is secret, but because enrichment is a nation-state effort that cannot go unnoticed.

Realistically, the more tokens they are selling, the harder it is for them to control it.

UlisesAC4 - 4 hours ago

Anthropic has a lot to explain if their advantage can be closed with just black-box distillation. And if it's white-box distillation, they have far worse things to take care of.

maxglute - 5 hours ago

~650 messages per account? That seems like either very little or too much. I'm surprised there isn't a coordinated distillation service with 5x the accounts to spread the load.
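The ~650 figure follows directly from the two numbers in the announcement (24,000 accounts, 16 million exchanges); a quick check:

```python
# Average load per account implied by Anthropic's reported figures.
accounts = 24_000
exchanges = 16_000_000

per_account = exchanges / accounts
print(f"~{per_account:.0f} exchanges per account")
```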

int32_64 - 5 hours ago

The company that claims all knowledge workers are going to be wiped out by their technology is asking these future disenfranchised workers to care about the Chinese ripping off their tech. That seems like a hard no.

StarterPro - 5 hours ago

> Distillation is a widely used and legitimate training method.

Oh ok, so you can steal from everyone, but when they do it to you, it's bad.

zb3 - 5 hours ago

> But foreign labs that illicitly distill American models can remove safeguards

I hope so, I don't need their "safeguards".

ralph84 - 5 hours ago

Human knowledge belongs to humanity. Of course the people who want to paywall it and extract rent will try to concoct some ethical basis for their rent seeking. Anthropic appears to be choosing the xenophobic route.

ks2048 - 5 hours ago

I wonder how much American labs do the same.

sidgarimella - 5 hours ago

would sure be nice if the effort spent fighting their karma was pointed at a better frontier model

Imustaskforhelp - 4 hours ago

Edit: For what it's worth, I don't really care about Anthropic being scraped, because they scrape and illegally torrent every book without any compensation, among so many other things, so they can't really play the moral card or give any moral response at this point. Their scraping directly leads to some servers effectively getting DDoSed, among other problems.

Also actually, we all sort of knew this but its interesting to see Anthropic call out such companies in public.

I think providing models at 1/20th the cost and open-sourcing them, while sometimes being much leaner, is for the most part an overall win for the general public, whose data was questionably stolen by Anthropic; it seems some court cases about this are still ongoing.

One of the more curious things I want to say is that Qwen and GLM 5 (Z.ai) are not in this.

Personally I love Kimi the most, and maybe in the future we will see whether more AI companies like ChatGPT/Google have proof of distillation as well.

But the fact that Z.ai isn't distilling makes me wonder what they are doing and how. Qwen models, although nice, are not the best at the moment, so I especially wonder what Z.ai's model training does and where they get their training data.

I still love Kimi and I would probably use Kimi but I am interested to know more about the training sources of Z.ai

Another point: Kimi and Qwen are quite tightly linked (Kimi, aka Moonshot AI, is backed by Alibaba, aka Qwen) [https://www.cnbc.com/2026/01/19/alibaba-backed-startup-moons...], and yet Qwen isn't named here. Why didn't Qwen also share the data? Or could Kimi/Moonshot have trained on Anthropic and shared the data with Qwen/Alibaba too, with Qwen's name just never surfacing publicly?

I can definitely see that being a possibility, given that Kimi/Moonshot uses servers hosted on Alibaba.

Interestingly for Z.ai I found a quick fact about them from Wikipedia:

In May 2024, the Saudi Arabian finance firm Prosperity7 Ventures, LLC participated in a USD $400 million financing round for Zhipu AI with a valuation of approximately 3 billion USD.

I want to know: does Z.ai do any large-scale web scraping? Where does Z.ai get the 15T–28.5T tokens I've seen cited?

I saw this comment from an article:

Pre-training: On a 23T token dataset curated from diverse sources, with emphasis on high-quality data through techniques like SemDeDup and quality-tiered up-sampling.

I think I am interested in this rabbit hole because if Anthropic has caught them, this will definitely impact these companies in the future as Anthropic's models get better, and they might have to figure out the training data issue, which Z.ai might've already solved?

I am still extremely suspicious of Z.ai but perhaps someone who has the tech reach on twitter or any other platform (maybe simonw?) could ask them.

I think the Z.ai guys are really open people, especially within the research community, yet I don't remember hearing about them scraping intensively, while we consistently see posts about American or even Chinese crawlers (Baidu most notoriously, iirc) basically DDoSing web servers, git servers, etc.

What is the Z.ai team doing such that they don't distill Anthropic and don't cause intensive scraping problems, while still getting good-quality data? It does seem too good to be true, unless I am missing something, which I think I might be. So if anyone has the expertise, I would love to know more.

devnonymous - 5 hours ago

> These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude

What exactly makes these accounts *fraudulent*... did they not pay Anthropic for the service?

ChrisArchitect - 5 hours ago

Some more discussion on source: https://www.anthropic.com/news/detecting-and-preventing-dist... (https://news.ycombinator.com/item?id=47126177)

catsquirrel28 - 4 hours ago

My guess is they're setting up a narrative to claim this whole AI bubble wasn't a giant grift and they would've been profitable if it weren't for those dang Chinese people distilling their models and giving them out for free.

akmarinov - 4 hours ago

So what? Should I announce that Anthropic has been trained on copyrighted material they stole?

rsynnott - 5 hours ago

Oh, now we care about IP, do we?

bakugo - 5 hours ago

Anthropic leadership once again showing off a remarkable level of immaturity.

Of course they don't want anyone else to use the precious outputs from the model they created by scraping data from the millions of fleshbag programmers they're now trying to put out of a job. They're just another corporation with the standard goal of making as much money as possible with little regard for anything else, so that much is expected.

But to actually write up a public announcement like this, loudly and proudly announcing to the world that they're crying at the daycare because their precious toy has been stolen by some kid, even though everyone around them knows they themselves originally stole that toy from another kid too, takes a special kind of corporate shamelessness that seems to be becoming more prevalent by the day.

gostsamo - 5 hours ago

"You are trying to kidnap what I have rightfully stolen, and I think it quite ungentlemanly."

eagleinparadise - 5 hours ago

world's smallest violin meme

stefan_ - 5 hours ago

Anthropic, of course, ran an industrial-scale distillation attack on the combined works of humankind. So, uh... kindly go fuck yourself? Who asked?
