Claude Code daily benchmarks for degradation tracking (marginlab.ai)
623 points by qwesr123 16 hours ago
Hi everyone, Thariq from the Claude Code team here.
Thanks for reporting this. We fixed a Claude Code harness issue that was introduced on 1/26. This was rolled back on 1/28 as soon as we found it.
Run `claude update` to make sure you're on the latest version.
Is there compensation for the tokens, since Claude wasted all of them?
You are funny. Anthropic refuses to issue refunds, even when they break things.
I had an API token set via an env var in my shell, and Claude Code changed to read that env var. I had a $10 limit set on it, so I only found out it was hitting the API, instead of my subscription, when it stopped working.
I filed a ticket and they refused to refund me, even though it was a breaking change in Claude Code.
Anthropic just reduced the price of the team plan and refunded us on the prior invoice.
YMMV
Codex seems to give compensation tokens whenever this happens! Hope Claude does too.
It is possible that degradation is an unconscious emergent phenomenon that arises from financial incentives, rather than a purposeful degradation to reduce costs.
Anywhere we can read more about what a "harness issue" means? What was the impact of it?
Pretty sure they mean the issue is on the agentic loop and related tool calling, not on the model itself
In other words, it was the Claude Code _app_ that was busted
How about how Claude Code 2.1.x is "literally unusable" because it frequently hangs completely (requires kill -9) and uses 100% CPU?
What OS? Does this happen randomly, after long sessions, after context compression? Do you have any plugins / mcp servers running?
I used to have this same issue almost every session that lasted longer than 30 minutes. It seemed to be related to Claude having issues with large context windows.
It stopped happening maybe a month ago but then I had it happen again last week.
I realized it was due to a third-party mcp server. I uninstalled it and haven’t had that issue since. Might be worth looking into.
Thanks for the clarification. When you say “harness issue,” does that mean the problem was in the Claude Code wrapper / execution environment rather than the underlying model itself?
Curious whether this affected things like prompt execution order, retries, or tool calls, or if it was mostly around how requests were being routed. Understanding the boundary would help when debugging similar setups.
It happened before 1/26. I noticed when it started modifying plans significantly with "improvements".
Hi. Do you guys have internal degradation tests?
I assume so to make sure that they're rendering at 60FPS
You joke, but having CC open in the terminal hits 10% on my GPU just to render the spinning thinking animation, for some reason. Switch out of the terminal tab and the GPU drops back to zero.
Surely you mean 6fps
He doesn't: https://x.com/trq212/status/2014051501786931427
For those who don't want to visit X:
Most people's mental model of Claude Code is that "it's just a TUI" but it should really be closer to "a small game engine".
For each frame our pipeline constructs a scene graph with React then
-> layouts elements
-> rasterizes them to a 2d screen
-> diffs that against the previous screen
-> finally uses the diff to generate ANSI sequences to draw
We have a ~16ms frame budget, so we have roughly ~5ms to go from the React scene graph to ANSI written.
Interesting. At first glance that seems over-engineered. I wonder what the reason is for doing it that way?
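To make the quoted pipeline concrete, here is a minimal sketch of the final step: diffing the rasterized screen against the previous frame and emitting only the changed cells as ANSI. This is my own illustration, assuming a simple grid-of-characters screen model, not Claude Code's actual renderer:

    # Hypothetical sketch of a diff-based terminal renderer; the screen is
    # modeled as equal-length rows of characters, and only changed cells
    # are redrawn via ANSI cursor-positioning escapes.
    def render_diff(prev: list[str], curr: list[str]) -> str:
        out = []
        for row, (old_line, new_line) in enumerate(zip(prev, curr)):
            if old_line == new_line:
                continue  # unchanged row: emit nothing at all
            for col, (old_ch, new_ch) in enumerate(zip(old_line, new_line)):
                if old_ch != new_ch:
                    # ESC[row;colH moves the cursor (1-indexed), then write the new char
                    out.append(f"\x1b[{row + 1};{col + 1}H{new_ch}")
        return "".join(out)

    prev_frame = ["Thinking.  ", "           "]
    curr_frame = ["Thinking.. ", "           "]
    print(repr(render_diff(prev_frame, curr_frame)))  # only the new dot gets drawn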
How ridiculous is it that instead of a command-line binary it's a terminal emulator, with React of all things!
Ok I’m glad I’m not the only one wondering this. I want to give them the benefit of the doubt that there is some reason for doing it this way but I almost wonder if it isn’t just because it’s being built with Claude.
Implementation details aside (React??), that sounds exactly like “just a TUI”…
Also React?? One of the slowest rendering front-end libraries? Why not use something … I don’t know … faster / more efficient?
And that's why it's taking so much CPU and is a pain to use with tmux.
What? Technology has stopped making sense to me. Drawing a UI with React and rasterizing it to ANSI? Are we competing to see what the least appropriate use of React is? Are they really using React to draw a few boxes of text on screen?
I'm just flabbergasted.
The further I scroll the more validated I feel for having the very same reaction.
There is more than meets the eye for sure. I recently compared a popular TUI library in Go (Bubble Tea) to the most popular Rust library (Ratatui). They use significantly different approaches for rendering. From what I can tell, neither is insane. I haven’t looked to see what Claude Code uses.
It's AI all the way down
But it's heavily subsidized compared to API tokens, so really we are all being paid by VCs to write prompts.
Yes, we do, but harnesses are hard to eval: people use them across a huge variety of tasks, and sometimes different behaviors trade off against each other. We have added some evals to catch this one in particular.
I'd wager probably not. It's not like reliability is what will get them market share, and the fast pace of the industry makes such foundational tech hard to fund.
[flagged]
Please don't post shallow dismissals or cross into personal attack in HN discussions.
WTF is a "harness issue"? You have to be more clear.
The issue is unrelated to the foundation model itself; it's in the prompts and tool calling that encapsulate the model.
For the models themselves (less so for the scaffolding), and considering things like the long-running TPU bug that happened: are there internal quality measures looking at samples of real outputs? Running the real systems against benchmarks and looking for degraded performance, or for things like skipped refusals? Aside from degrading things for users, with the focus on AI safety, wouldn't that be important in case an inference bug messes with something that affects the post-training and the model starts giving out dangerous bioweapon construction info, or the other things that are guarded against and discussed in the model cards?
lol, I was trying to help someone get Claude to analyze a student's research notes on bio persistence.
The mere presence of the word/acronym "stx" with biological subtext gets hard-rejected. Asking about Schedule 1 regulated compounds: hard termination.
This is a filter setup that guarantees anyone who needs to learn about these things for safety or medical reasons… can't use this tool!
I've fed multiple models the Anthropic constitution and asked: how does it protect children from harm or abuse? Every model, with zero prompting, called it corporate-liability bullshit, because it is more concerned with respecting both sides of controversial topics and political conflicts.
They then list some pretty gnarly things allowed per the constitution. Weirdly, the only unambiguously disallowed thing regarding children is CSAM. So all the different high-reasoning models from many providers reached the same conclusions; in one case DeepSeek got weirdly inconsolable about AI ethics being meaningless if this is even possibly allowed, after reading some relevant satire I had Opus write. I literally had to offer an LLM-optimized code of ethics for that chat instance! Which is amusing, but was actually part of the experiment.
[SWE-bench co-author here] It seems like they run this test on a subset of 50 tasks, and that they only run the test once per day. So a lot of the movement in accuracy could be attributed to that. I would run on 300 tasks and I'd run the test suite 5 or 10 times per day and average that score. Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded.
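A back-of-the-envelope illustration of that variance point, treating each task as an independent pass/fail trial and assuming a ~70% true pass rate purely for illustration (not MarginLab's actual numbers):

    import math

    def standard_error(pass_rate: float, n_tasks: int, n_runs: int = 1) -> float:
        # Standard error of the measured pass rate over n_tasks * n_runs
        # attempts, modeling each attempt as an independent Bernoulli trial.
        return math.sqrt(pass_rate * (1 - pass_rate) / (n_tasks * n_runs))

    p = 0.70  # assumed true pass rate, for illustration only
    print(f"50 tasks, 1 run/day:   +/- {standard_error(p, 50, 1):.1%}")   # ~6.5%
    print(f"300 tasks, 5 runs/day: +/- {standard_error(p, 300, 5):.1%}")  # ~1.2%

With 50 tasks run once, day-to-day swings of several percentage points are expected from sampling noise alone, before any real model or harness change.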
But degradation from servers being overloaded is exactly the type of degradation this SHOULD measure, no? Unless it's only intended to catch them quietly distilling models (which they claim not to do? idk for certain).
Load just makes LLMs behave less deterministically and likely degrade. See: https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
They don't have to be malicious operators in this case. It just happens.
> malicious
It doesn't have to be malicious. If my workflow is to send a prompt once and hopefully accept the result, then degradation matters a lot. If degradation is causing me to silently get worse code output on some of my commits it matters to me.
I care about -expected- performance when picking which model to use, not optimal benchmark performance.
Non-determinism isn’t the same as degradation.
The non-determinism means that even with a temperature of 0.0, you can’t expect the outputs to be the same across API calls.
In practice people tend to index to the best results they’ve experienced and view anything else as degradation. In practice it may just be randomness in either direction from the prompts. When you’re getting good results you assume it’s normal. When things feel off you think something abnormal is happening. Rerun the exact same prompts and context with temperature 0 and you might get a different result.
This has nothing to do with overloading. The suspicion is that when there is too much demand (or they just want to save costs), Anthropic sometimes uses a less capable (quantized, distilled, etc) version of the model. People want to measure this so there is concrete evidence instead of hunches and feelings.
To say that this measurement is bad because the server might just be overloaded completely misses the point. The point is to see if the model sometimes silently performs worse. If I get a response from "Opus", I want a response from Opus. Or at least want to be told that I'm getting slightly-dumber-Opus this hour because the server load is too much.
this is about variance of daily statistics, so I think the suggestions are entirely appropriate in this context.
The question I have now after reading this paper (which was really insightful) is do the models really get worse under load, or do they just have a higher variance? It seems like the latter is what we should expect, not it getting worse, but absent load data we can't really know.
Explain this though. The code is deterministic, even if it relies on pseudo random number generation. It doesn't just happen, someone has to make a conscious decision to force a different code path (or model) if the system is loaded.
It's not deterministic. Any individual floating-point mul/add is deterministic, but on a GPU these are all happening in parallel, and the accumulation happens in whatever order they complete.
When you add A then B then C, you get a different answer than C then A then B, because of floating-point rounding, approximation error, subnormals, etc.
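A quick way to see that non-associativity with plain Python floats (GPU float32 behaves the same way, just at different magnitudes):

    # Floating-point addition is not associative: the result depends on the
    # order in which partial sums are accumulated.
    a, b, c = 1e20, -1e20, 0.1

    print((a + b) + c)  # 0.1 -- the two huge values cancel first
    print(a + (b + c))  # 0.0 -- the 0.1 is lost when rounded against 1e20

    # A parallel reduction that sums terms in whatever order threads finish
    # can therefore give slightly different answers on every run.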
It can be made deterministic. It's not trivial and can slow it down a bit (not much) but there are environment variables you can set to make your GPU computations bitwise reproducible. I have done this in training models with Pytorch.
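For reference, a sketch of the kind of settings the parent is describing, for PyTorch on CUDA (the exact flags needed depend on your PyTorch/CUDA versions):

    import os
    import random

    import numpy as np
    import torch

    # Must be set before the CUDA context is created; required for
    # deterministic cuBLAS kernels on CUDA >= 10.2.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

    # Seed every RNG in play.
    random.seed(0)
    np.random.seed(0)
    torch.manual_seed(0)

    # Error out if an op has no deterministic implementation.
    torch.use_deterministic_algorithms(True)

    # Stop cuDNN from autotuning, which can pick different kernels run to run.
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True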
There are settings to make it reproducible but they incur a non-negligible drop in performance.
Unsurprising given they amount to explicit synchronization to make the order of operations deterministic.
Not deterministic. https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
For all practical purposes any code reliant on the output of a PRNG is non-deterministic in all but the most pedantic senses... And if the LLM temperature isn't set to 0 LLMs are sampling from a distribution.
If you're going to call a PRNG deterministic then the outcome of a complicated concurrent system with no guaranteed ordering is going to be deterministic too!
Temperature can't be literally zero, or it creates a divide by zero error.
When people say zero, it is shorthand for “as deterministic as this system allows”, but it's still not completely deterministic.
Zero temp just uses argmax, which is what softmax approaches in the limit as T goes to zero anyway. So it could very well be deterministic.
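A small numerical check of that limit, with toy logits (my own sketch):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())  # subtract max for numerical stability
        return e / e.sum()

    logits = np.array([2.0, 1.0, 0.5])

    # As T shrinks, softmax(logits / T) collapses onto a one-hot at argmax,
    # so greedy (argmax) decoding is the zero-temperature limit.
    for T in (1.0, 0.1, 0.01):
        print(T, softmax(logits / T).round(4))
    print("argmax:", int(np.argmax(logits)))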
No, this isn't right. There are totally legitimate use cases for PRNGs as sources of random number sequences following a certain probability distribution where freezing the seed and getting reproducibility is actually required.
And for a complicated concurrent system you can also replay the exact timings and orderings as well!
How is this related to overloading? The nondeterminism should not be a function of overloading; it should just time out or reply more slowly. It will only be dumber if it gets rerouted to a dumber, faster model, e.g. a quantized one.
Floating point math isn't associative for operations that are associative in normal math.
That would just add up to statistical noise instead of 10% degradation over a week.
Catastrophic error accumulation can produce more profound effects than noise.
Just to make sure I got this right. They serve millions of requests a day & somehow catastrophic error accumulation is what is causing the 10% degradation & no one at Anthropic is noticing it. Is that the theory?
It takes a different code path for efficiency.
e.g.
    if batch_size > 1024: kernel_x
    else: kernel_y
There are a million algorithms that make LLM inference more efficient at some cost in output quality, like using a smaller model, using quantized models, using speculative decoding with a more permissive rejection threshold, etc. etc.
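For intuition on one of those knobs, a toy illustration of the rounding error introduced by symmetric int8 weight quantization (my own example; not how any particular provider quantizes in production):

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=1000).astype(np.float32)  # stand-in for model weights

    scale = np.abs(w).max() / 127                 # symmetric int8 quantization
    w_q = np.round(w / scale).astype(np.int8)     # 1 byte per weight instead of 4
    w_deq = w_q.astype(np.float32) * scale        # what inference actually uses

    print("max abs error: ", np.abs(w - w_deq).max())
    print("mean abs error:", np.abs(w - w_deq).mean())

Every matmul then sees slightly perturbed weights; usually benign, but measurable.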
It's very clearly a cost tradeoff that they control and that should be measured.
The primary (non-malicious, non-stupid) explanation given here is batching. But I think if you looked at large-scale inference you would find the batch sizes being run on any given rig are fairly static: there is a sweet spot for any given model part, run individually, between memory consumption and GPU utilization, and GPUs generally do badly at job parallelism.
I think the more likely explanation is again with the extremely heterogeneous compute platforms they run on.
That's why I'd love to get stats on the load/hardware/location of where my inference is running. Looking at you, Trainium.
noob question: why would increased demand result in decreased intelligence?
An operator at load capacity can either refuse requests, or move the knobs (quantization, thinking time) so requests process faster. Both of those things make customers unhappy, but only one is obvious.
This is intentional? I think delivering lower quality than what was advertised and benchmarked is borderline fraud, but YMMV.
Per Anthropic's RCA for the September 2025 issues, linked in the OP's post:
“… To state it plainly: We never reduce model quality due to demand, time of day, or server load. …”
So according to Anthropic they are not tweaking quality setting due to demand.
And according to Google, they always delete data if requested.
And according to Meta, they always give you ALL the data they have on you when requested.
>And according to Google, they always delete data if requested.
However, the request form is on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard'.