Defeating Nondeterminism in LLM Inference

thinkingmachines.ai

323 points by jxmorris12 3 days ago


lsy - 3 days ago

Fixing "theoretical" nondeterminism for a totally closed individual input-output pair doesn't solve the two "practical" nondeterminism problems, where the exact same input gives different results given different preceding context, and where a slightly transformed input doesn't give a correctly transformed result.

Until those are addressed, closed-system determinism doesn't really help except in cases where a lookup table would do just as well. You can't use "correct" unit tests or evaluation sets to prove anything about inputs you haven't tested.

daralthus - 3 days ago

I thought this was pretty well known (at least in the JAX/XLA world). I've hit this many times and got batch variance explained to me before: https://github.com/google-deepmind/penzai/issues/82 and https://github.com/jax-ml/jax/issues/20047#issuecomment-1975...

dns_snek - 2 days ago

Why do you care about determinism in a probabilistic system? What difference does it make to the end user if the input "How do I X?" always produces the same deterministic output, when semantically equivalent inputs "how do i x?", "how do I x", and "how do I X??" are bound to produce different answers that often won't even be semantically equivalent?

What LLMs need is the ability to guarantee semantically-equivalent outputs for all semantically-equivalent inputs, but that's very different from "determinism" as we understand it from other algorithms.

jll29 - 3 days ago

Sometimes, the reason for non-determinism is implementation-specific. For instance, in GPT-2's source code (I haven't checked other model versions), setting the temperature to 0 in the GUI does not lead to an actual value of 0 but to "epsilon" (a very small value larger than 0), to avoid a division-by-zero error in the code, which makes sense.
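
A minimal sketch (not GPT-2's actual code) of why a UI "temperature = 0" is often clamped to a small epsilon: the softmax divides the logits by the temperature, so a literal zero would divide by zero.

    import torch

    def apply_temperature(logits: torch.Tensor, temperature: float, eps: float = 1e-6) -> torch.Tensor:
        # Clamp the temperature so we never divide by zero; eps is an arbitrary illustrative value.
        t = max(temperature, eps)
        return torch.softmax(logits / t, dim=-1)

    logits = torch.tensor([3.0, 1.0, 0.2])
    print(apply_temperature(logits, 0.0))  # sharply peaked, effectively argmax/greedy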

For many applications, non-determinism implies "useless". This has been a long-standing issue with LDA topic models. In particular in the legal, financial, and regulatory domains, if a method is not deterministic, it may be illegal to use, or it may lead to follow-on requirements that one does not want (e.g. all screens shown to humans must be preserved to be able to go back and reconstruct what exactly happened to a particular user in a particular second).

nakamoto_damacy - 3 days ago

"in collaboration with others at Thinking Machines"

If you're old enough, you might remember Danny Hillis' Thinking Machines from the late 80s. I wish they had chosen a different name (I say this for nostalgic reasons, having been in front of one of those cubes glowing with red LEDs back in the late 80s at MIT's AI Lab, since renamed to CSAIL). Feynman did some amazing work on that, too: https://longnow.org/ideas/richard-feynman-and-the-connection...

In the U.S., the “THINKING MACHINES” trademarks were owned by Thinking Machines Corporation (the company Hillis co-founded), not Hillis personally, and those registrations were cancelled in 1998–1999.

The company itself went bankrupt in 1994 and its assets were dispersed (e.g., to Sun Microsystems, later Oracle).

There’s a new, pending USPTO application for “THINKING MACHINES” filed in 2025 by Thinking Machines Lab Inc., the company founded by Mira Murati.

jasonjmcghee - 3 days ago

I love high-quality, blog-post-style research discussion - Anthropic has been leading the charge with this recently and it's great to see it spreading. OpenAI was also doing this during the RL research days.

riazrizvi - 3 days ago

Natural language is ambiguous. It needs to be. I think the approach here of trying to figure out how to make circles into squares, and argue why circles should be squares, is misguided.

Discussions of this type are going to eventually morph into better understanding of how to accept ambiguity and randomness in language, and further shape it with other larger sub-patterns beyond the little proto-grammars that the QKV projection matrices extract.

andy99 - 16 hours ago

Deterministic reproducibility is very different from replicability, and IMO the latter is more important; even if the details of reproducibility are interesting, I think they're irrelevant.

There's a similar situation in other scientific disciplines. People want source code and data so they can reproduce results - that basically tells you someone didn't cheat and they documented everything. But it does not tell you whether a real phenomenon was observed.

It's much more interesting to know if roughly the same cause and effect relationships exist so we can predict behavior.

Concretely, there are studies showing that e.g. randomly capitalizing letters can lead to completely different responses from an LLM. That speaks to a fragility that doesn't have anything to do with deterministic reproduction.

gond - 3 days ago

I am still irritated by the name of the company.

What is the reasoning behind these schemes? The hope that bits of the properties of legendary companies will rub off onto the new venture?

As if naming the next best venture PARC will inevitably create a breakthrough in networking just by the arrangement of four letters.

menaerus - 2 days ago

> But why aren’t LLM inference engines deterministic? One common hypothesis is that some combination of floating-point non-associativity and concurrent execution leads to nondeterminism based on which concurrent core finishes first. We will call this the “concurrency + floating point” hypothesis for LLM inference nondeterminism. For example, a recent arXiv preprint writes

I'm honored to see that Mira and co. appreciated the feedback I gave on this very topic 7 months ago here :D

> You don't need RNG since the whole transformer is an extremely large floating-point arithmetic unit. A wild guess - how about the source of non-determinism is coming from the fact that, on the HW level, tensor execution order is not guaranteed and therefore (T0 * T1) * T2 can produce slightly different results than T0 * (T1 * T2) due to rounding errors?

https://news.ycombinator.com/item?id=42952605#42960047
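
For reference, a quick illustration of the non-associativity the quote describes; the values are arbitrary, but the rounding behavior is standard float32:

    import torch

    a = torch.tensor(1e8, dtype=torch.float32)
    b = torch.tensor(-1e8, dtype=torch.float32)
    c = torch.tensor(0.1, dtype=torch.float32)

    print(((a + b) + c).item())  # ~0.1: a and b cancel first, then c is added
    print((a + (b + c)).item())  # 0.0: c is absorbed into b's rounding before a cancels it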

syntaxing - 3 days ago

Super interesting. For those unaware, this is the company Mira Murati (OpenAI's previous CTO) started.

mg - 3 days ago

I really hope we will get deterministic LLMs in the future. Even if it causes slightly slower response times.

Nondeterminism is what currently keeps me from working with other developers.

As I wrote in "Prompt Coding" [1], these days I am not looking for good code. I am looking for prompts that create good code. But how do you share prompts among developers when they produce different code every time? You cannot simply state "Here, I found a prompt that makes gpt-5-2025-08-07 output a solution with all the desired attributes".

Similar with images. At the moment, for most image models, you cannot outsource the task of writing prompts that create the desired images. Because most image models will not create the same image when given the same prompt and parameters.

[1]: https://www.gibney.org/prompt_coding

kybernetikos - 3 days ago

For fun over the last few days, I've built a compressor/decompressor that uses the logits from an LLM for each token in the input, takes the ranks, and exponential-Golomb encodes them. Then you work in reverse to regenerate the original.

It took me ages to get the prediction for the second token after "hello" to match the same as the prediction for the second token when running the model on the string "hello world", despite the fact that I was using a causal model. I tried all kinds of things before discovering that `quantized: false` was the important setting.
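
A minimal sketch of the scheme described above (not my actual code; the model name and helper names are arbitrary). The round trip only works if the logits are bitwise reproducible between compression and decompression, which is exactly where settings like `quantized: false` matter:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def token_ranks(text: str) -> list[int]:
        # Rank of each actual next token under the model's prediction at that position.
        ids = tok(text, return_tensors="pt").input_ids[0]
        with torch.no_grad():
            logits = model(ids.unsqueeze(0)).logits[0]
        order = logits.argsort(dim=-1, descending=True)
        return [(order[i] == ids[i + 1]).nonzero().item() for i in range(len(ids) - 1)]

    def reconstruct(first_id: int, ranks: list[int]) -> str:
        # Replay the same predictions and pick the token at each stored rank.
        ids = [first_id]
        for r in ranks:
            with torch.no_grad():
                logits = model(torch.tensor([ids])).logits[0, -1]
            ids.append(logits.argsort(descending=True)[r].item())
        return tok.decode(ids)

    text = "hello world"
    ranks = token_ranks(text)  # mostly small integers, cheap to exp-Golomb encode
    assert reconstruct(tok(text).input_ids[0], ranks) == text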

eldenring - 3 days ago

Very impressive! I guess this still wouldn't affect their original example

> For example, you might observe that asking ChatGPT the same question multiple times provides different results.

even at temperature 0.0, due to MoE models routing at the batch level; you're very unlikely to get a deterministic batch.

> Not because we’re somehow leaking information across batches — instead, it’s because our forward pass lacks “batch invariance”, causing our request’s output to depend on the batch size of our forward pass.

The router also leaks batch-level information across sequences.
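
A toy illustration of that last point (not any specific model's router): with capacity-limited top-1 routing, whether a token is served by its preferred expert can depend on which other sequences share the batch.

    import torch

    def route_top1(scores: torch.Tensor, capacity: int) -> torch.Tensor:
        # Assign each token to its top expert, dropping overflow beyond `capacity` per expert.
        top = scores.argmax(dim=-1)
        assigned = torch.full_like(top, -1)  # -1 means "dropped / overflow path"
        counts: dict[int, int] = {}
        for i, e in enumerate(top.tolist()):
            if counts.get(e, 0) < capacity:
                assigned[i] = e
                counts[e] = counts.get(e, 0) + 1
        return assigned

    my_token = torch.tensor([[2.0, 0.1]])   # strongly prefers expert 0
    other_a = torch.tensor([[0.3, 0.2]])    # also prefers expert 0
    other_b = torch.tensor([[0.1, 0.9]])    # prefers expert 1

    # Same token, same weights; its routing depends on its batchmates.
    print(route_top1(torch.cat([other_a, my_token]), capacity=1))  # tensor([ 0, -1])
    print(route_top1(torch.cat([other_b, my_token]), capacity=1))  # tensor([1, 0])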

gajjanag - 2 days ago

As others have pointed out, these phenomena are well known to many folks across companies in the AI infra space. It doesn't really break new ground. This article is a good exposition of the basic strategies though.

What I would have loved is a discussion around collectives/multi-node setups. And showing how to get determinism at low performance penalty for multi-node reduction collectives.

orbital-decay - 2 days ago

By setting the temperature to 0 you get greedy decoding, which does a lot more than just making it predictable, and can degrade outputs. Random sampling exists for a reason! Gemini 2.5 Pro in particular doesn't like temp 0, for example.

Focus on correctness, not determinism.

quantum_state - 3 days ago

At the bottom of LLM inference, it is sampling the next token from the probability distribution conditioned on the tokens currently in the context window. If the distribution exhibits degeneracy in probability for more than one token, the outcome of the sampling will naturally, as it should, be nondeterministic. It should be left alone.

cubefox - 3 days ago

His solution still relies on greedy (temperature 0) sampling, which is probably not optimal for model performance on various tasks. For example, Gemini 2.5 uses temperature 1 by default. But deterministic inference with temperature >0 can still be achieved by using pseudorandom sampling with a fixed seed.
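
A minimal sketch of that last point, assuming a single-request setting where the logits themselves are already reproducible: with a fixed seed, temperature > 0 sampling is repeatable.

    import torch

    def sample(logits: torch.Tensor, temperature: float, seed: int) -> int:
        # Seeded multinomial sampling: same logits + same seed -> same token.
        gen = torch.Generator().manual_seed(seed)
        probs = torch.softmax(logits / temperature, dim=-1)
        return torch.multinomial(probs, num_samples=1, generator=gen).item()

    logits = torch.tensor([2.0, 1.0, 0.5, -1.0])
    assert sample(logits, temperature=1.0, seed=0) == sample(logits, temperature=1.0, seed=0)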

measurablefunc - 3 days ago

I think this means that the results might also be non-deterministic across hardware revisions, because I don't think they verified that the kernels behave the same on different GPU & TPU versions; how do they know that the compiler will not re-order the operations behind their back?

bee_rider - 3 days ago

From their code:

    import torch

    A = torch.randn(2048, 2048, device='cuda', dtype=torch.bfloat16)
    B = torch.randn(2048, 2048, device='cuda', dtype=torch.bfloat16)
    ref = torch.mm(A, B)
    for _ in range(1000):
        # Repeated matmuls over the same data are bitwise identical run-to-run.
        assert (torch.mm(A, B) - ref).abs().max().item() == 0

I’m sort of surprised that Torch doesn’t have some kind of lazy evaluation thing to avoid computing anything here. I thought that was one of the nice things about all these fancy frameworks (if I wanted the computer to actually do silly things when I asked it to, I would use BLAS directly, right?).
simne - 2 days ago

This is an eternal struggle. Hardware developers will keep scaling horizontally and making hardware less deterministic in time (because of the memory wall), while researchers keep developing new ways to make the calculations deterministic.

So even if progress is achieved just now, I think that for the foreseeable future this will remain a constant dead end.

themeiguoren - 2 days ago

A bit off topic from the technical discussion but does anyone recognize what blog layout or engine this is? I really like the layout with sidenotes and navigation.

Noumenon72 - 2 days ago

Are the results of the matmuls really so far apart in magnitude that you lose significant bits when adding them up in FP32?

paulbjensen - 3 days ago

It reminded me of this wonderful talk by the late Joe Armstrong (Erlang's creator): https://www.youtube.com/watch?v=lKXe3HUG2l4

Great post.

PeterStuer - 2 days ago

THANK YOU! Great work and writeup. Hope it finally silences the "concurrency + floating point" crowd and the "LLMs can never be deterministic" zealots.

zacksiri - 2 days ago

This work is extremely consequential. When building agentic systems, determinism will significantly improve reliability.

I hope all the model providers adopt this.

bendoy - 3 days ago

Where this gets really complicated is when you are chaining many LLM calls together (basically any agent). A slight deviation in the call stack can throw off everything else.

lrvick - 3 days ago

Job one is to have every bit of software involved also be deterministic, which stagex takes care of.

I had no problem getting deterministic LLM outputs when I experimented with this 6 months ago.

Run two of these with the same prompts and same seed and you get the same results.

Obviously in GPU clusters with different hardware things get more complicated.

https://git.distrust.co/public/llmshell

threeducks - 3 days ago

It should also be noted that PyTorch has a page about reproducibility: https://docs.pytorch.org/docs/stable/notes/randomness.html

TL;DR

Seed your PRNGs and call torch.use_deterministic_algorithms(True) to get the deterministic kernels. They may be slightly slower, but in practice, you probably will not notice.
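
A minimal sketch of that recipe (CPU example; on CUDA some ops additionally require setting the CUBLAS_WORKSPACE_CONFIG environment variable, per the same notes):

    import torch

    torch.manual_seed(0)                      # seed the PRNGs
    torch.use_deterministic_algorithms(True)  # raise an error on nondeterministic kernels

    x = torch.randn(8, 16)
    w = torch.randn(16, 4)
    print(x @ w)  # identical across runs on the same hardware/driver/library versions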

Note that results will still differ between different drivers and GPUs. It would be great if NVIDIA tried harder in that regard.

reasonableklout - 2 days ago

Some great discussion on twitter: https://x.com/thinkymachines/status/1965826369721623001

Seems a buried lede is that on-policy RL is unlocked by bitwise identical results between training and sampling. I'm not an expert here but my understanding is that this would allow for stronger guarantees about deployment/training alignment for the RL training that the labs already do.

I don't fully understand the BigMath example though. They show that off-policy RLVR requires off-policy correction, which avoids divergence, but is suboptimal because it results in noisy rewards. Then they say "we fixed the sampler and trainer numerical mismatch, which allows for on-policy RL, look how much better it is." It's not clear to me whether this is an artificial example that deliberately uses different trainer/sampler setups, or if it's actually impossible to have the same numerics between trainer/sampler without their fixes (even if we use same batch size, no atomics, etc.).
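
My reading of the correction being discussed (a hedged sketch with made-up numbers, not the blog's code): the policy-gradient term gets reweighted by the probability ratio between trainer and sampler, and that ratio is exactly 1 only when their numerics match bitwise.

    import torch

    # Hypothetical per-token quantities for one sampled response.
    logp_sampler = torch.tensor([-1.25, -0.68, -2.30])                      # sampling engine
    logp_trainer = torch.tensor([-1.20, -0.70, -2.10], requires_grad=True)  # trainer's numerics
    advantage = torch.tensor([1.0, -0.5, 2.0])

    ratio = torch.exp(logp_trainer - logp_sampler)  # importance weight; 1.0 if numerics match
    loss = -(ratio * advantage).mean()              # importance-weighted surrogate, noisier when ratio != 1
    loss.backward()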

htrp - 3 days ago

Do we know what Thinking Machines does yet?

emharsha1812 - 2 days ago

I think this is an excellent article that addresses an issue I personally have been thinking about for a long time. And no, it's not just some slop they put out but an actual engineering blog (with open-source code and reproducible results!). I think the company is off to a good start.

sudohalt - 3 days ago

Cool project, but if this is what you are producing with $2 billion in funding, I doubt you will survive. This is the type of article a grad student would write over a weekend.

TNDnow - 3 days ago

Who needs a working product when you can spend all day designing the most WeWork-looking website and slapping some pseud slop on it? It's like crypto "startups", but it's not even fun.

nowittyusername - 3 days ago

I am baffled that I still run into these statements years after LLMs have been around. LLMs are deterministic and always have been. The reason people have issues with them is that they base their assumptions on API-based experiments. Like my man, how can you make these statements when you haven't done the due diligence of running the LLM on your own hardware with all of the variables locked down and accounted for? If you do just that, it becomes obviously clear that they are deterministic, and most of the time the non-deterministic behavior you see is because you have not controlled for a variable, usually prompt caching, batch processing, or some other obvious one.

Now, this is about deterministic behavior within the same system. You might get different answers when running on a different GPU, but at least on the same system the behavior is 100% identical if you account for all server startup flags and properly account for things like prompt caching, slot contamination, etc.
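
A minimal sketch of the kind of locked-down local experiment described above (model name is arbitrary; greedy decoding, batch size 1, same machine):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    torch.manual_seed(0)
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    ids = tok("The quick brown fox", return_tensors="pt").input_ids
    out1 = model.generate(ids, do_sample=False, max_new_tokens=20)
    out2 = model.generate(ids, do_sample=False, max_new_tokens=20)
    assert torch.equal(out1, out2)  # bit-identical token ids run-to-run on the same system
    print(tok.decode(out1[0]))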