TurboQuant: Redefining AI efficiency with extreme compression

research.google

421 points by ray__ 14 hours ago


amitport - 12 hours ago

This is a great development for KV cache compression. I did notice a missing citation in the related works regarding the core mathematical mechanism, though. The foundational technique of applying a geometric rotation prior to extreme quantization, specifically for managing the high-dimensional geometry and enabling proper bias correction, was introduced in our NeurIPS 2021 paper, "DRIVE" (https://proceedings.neurips.cc/paper/2021/hash/0397758f8990c...). We used this exact rotational approach and a similar bias correction mechanism to achieve optimal distributed mean estimation. I also presented this work and subsequent papers in a private invited talk at Google shortly after publication. Given the strong theoretical overlap with the mechanisms in TurboQuant and PolarQuant, I hope to see this prior art acknowledged in the upcoming camera-ready versions.
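
For readers who haven't seen the trick: multiplying by a random orthogonal matrix makes the coordinates of any fixed vector look approximately i.i.d. Gaussian, so a single scale works well even at 1 bit per coordinate, and rotating back recovers the vector. A minimal numpy sketch of the general rotate-then-sign-quantize pattern (illustrative only; it uses a simple MSE-minimizing scale rather than DRIVE's bias-corrected estimator or TurboQuant's exact scheme):

  import numpy as np

  rng = np.random.default_rng(0)
  d = 1024
  x = rng.standard_normal(d) * np.logspace(0, 2, d)  # vector with wildly uneven coordinates

  # Random orthogonal rotation via QR of a Gaussian matrix. Practical schemes
  # use cheaper structured rotations, but the effect is the same: the rotated
  # coordinates look roughly i.i.d. Gaussian, so one scale fits them all.
  Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

  y = Q @ x
  s = np.abs(y).mean()            # MSE-optimal scale for sign (1-bit) quantization
  x_hat = Q.T @ (s * np.sign(y))  # decode: dequantize, rotate back

  print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # ~0.60 relative L2 error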

gavinray - 6 hours ago

Can someone please ELI5 these two concepts, which make no sense to me:

  > "TurboQuant starts by randomly rotating the data vectors. This clever step simplifies the data's geometry"
I don't understand how taking a series of data vectors and applying a random rotation could mathematically lead every time to "simpler" geometry.

If I throw a bunch of shapes on the ground, tightly packed and touching each other, then rotate all of them, you can't guarantee that the new conglomerate shape is any more/less "simple" than before, right?

  > "Johnson-Lindenstrauss Transform to shrink complex, high-dimensional data while preserving the essential distances and relationships between data points. It reduces each resulting vector number to a single sign bit (+1 or -1)."
How can a boolean value preserve all of the relational and positional information between data points?
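
Two things help here. First, the rotation doesn't simplify shapes; it spreads each vector's energy evenly across coordinates, so every coordinate looks like a small Gaussian sample and a single quantizer fits all of them. Second, it's one sign bit per projected coordinate, not one bit per vector, and the fraction of agreeing bits between two vectors estimates the angle between them. A sketch of that second point using plain SimHash (a standard technique, not necessarily TurboQuant's exact construction):

  import numpy as np

  rng = np.random.default_rng(0)
  d, m = 128, 4096                       # input dim, number of random projections
  a = rng.standard_normal(d)
  b = a + 0.5 * rng.standard_normal(d)   # a noisy copy of a

  P = rng.standard_normal((m, d))        # JL-style random projection
  bits_a = np.sign(P @ a)                # m sign bits per vector, not 1 bit total
  bits_b = np.sign(P @ b)

  # SimHash identity: Pr[signs agree] = 1 - theta/pi, so Hamming agreement
  # between the bit strings preserves angles (hence cosine similarity).
  theta_est = np.pi * (bits_a != bits_b).mean()
  theta_true = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
  print(theta_true, theta_est)           # close for large m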

akhenakh - 7 hours ago

Someone is already implementing it in llama.cpp: https://github.com/mudler/llama.cpp/commit/dee102db1bfd723c9...

pstoll - 8 hours ago

And a group has published an independent working implementation today, nice to see:

https://github.com/tonbistudio/turboquant-pytorch

benob - 12 hours ago

This is the worst lay-people explanation of an AI component I have seen in a long time. It doesn't even seem AI generated.

Serhii-Set - 4 hours ago

Compression research keeps producing surprisingly practical results. There's an interesting parallel in image formats: AVIF and JPEG XL both came out of adjacent standards work (the AV1 video codec and the JPEG committee, respectively), and the compression gains translated almost directly into production. Makes me wonder how much of the current AI quantization work will eventually land in production inference the same way.

bilsbie - 7 hours ago

It seems like most breakthroughs I see are about efficiency. What are the most important breakthroughs from the past two or three years for intelligence?

bluequbit - 13 hours ago

I did not understand what PolarQuant is.

Is it something like pattern-based compression, where the algorithm finds repeating patterns and creates an index of those common symbols or numbers?

mmastrac - 6 hours ago

Is this a tradeoff between GPU computation expense and accuracy? I.e., you could quantize into segments or grids on the unit circle/sphere/etc., but that's too expensive, so it's better to just quantize to a Cartesian grid because the GPU can decompress it more cheaply?

iddan - 6 hours ago

I am guessing that, as Google is vertically integrated and "actually pays" for AI infra (compared to OpenAI and Anthropic, which receive hardware through partnerships), they have a more urgent incentive to reduce model sizes. Also, Google and Apple will be the first to gain from running models on-device.

zeeshana07x - 10 hours ago

The gap between how this is described in the paper vs the blog post is pretty wide. Would be nice to see more accessible writing from research teams — not everyone reading is an ML engineer.

ssijak - 8 hours ago

For my grug brain, can somebody translate this to ELIgrug terms?

Does this mean I would be able to run a 500B model on my 48GB MacBook without losing quality?

macleginn - 8 hours ago

"TurboQuant proved it can quantize the key-value cache to just 3 bits without requiring training or fine-tuning and causing any compromise in model accuracy" -- what do each 3 bits correspond to? Hardly individual keys or values, since it would limit each of them to 8 different vectors.

maurelius2 - 12 hours ago

I'm somewhat at a loss here, beyond understanding the fundamentals. Can someone tell me how the compression impacts performance?

lwhi - 7 hours ago

Will this help us run models locally?

moktonar - 12 hours ago

Aren’t polar coordinates for an n-dim vector still n-1 angles plus 1 for the radius? If so, I understand that the angles can be quantized better, but when the radius r is big, the error is large for highly quantized angles, right? What am I missing?
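
That intuition checks out for the naive scheme: quantizing an angle with step delta moves a point at radius r by a chord of length 2·r·sin(delta/2), so the absolute error grows linearly with r. Schemes in this family typically store the radius/norm separately or at higher precision, which makes the error relative instead; I haven't checked exactly how PolarQuant handles it. A 2D sketch:

  import numpy as np

  rng = np.random.default_rng(0)
  k = 4                             # bits for the angle
  delta = 2 * np.pi / 2**k          # quantization step

  theta = rng.uniform(0, 2 * np.pi, 10_000)
  theta_q = np.round(theta / delta) * delta

  for r in (1.0, 10.0, 100.0):
      err = 2 * r * np.abs(np.sin((theta - theta_q) / 2))  # chord length at radius r
      print(f"r={r:6.1f}: mean Euclidean error {err.mean():.3f}")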

lucrbvi - 10 hours ago

Sounds like Multi-Head Latent Attention (MLA) from DeepSeek

_s_a_m_ - 6 hours ago

has the word "advanced", gotta be good

naasking - 6 hours ago

This sounds great! TurboQuant does KV cache compression using quantization via rotations, and ParoQuant [1] does weight compression using quantization via rotations! So we can get 4-bit weights that match bf16 precision, and the KV cache goes down to 3 bits per key. This brings larger models and long contexts into the range of "possibly runnable" on beefy consumer hardware.

[1] https://github.com/z-lab/paroquant
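
Back-of-envelope for "possibly runnable" (illustrative arithmetic only, not numbers from either paper):

  params = 70e9                    # hypothetical 70B-parameter model
  bf16_gb = params * 16 / 8 / 1e9  # 140 GB of weights in bf16
  int4_gb = params * 4 / 8 / 1e9   # 35 GB at 4 bits per weight
  print(f"weights: {bf16_gb:.0f} GB bf16 -> {int4_gb:.0f} GB at 4-bit")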

mskkm - 10 hours ago

Pied Piper vibes. As far as I can tell, this algorithm is hardly compatible with modern GPU architectures. My guess is that’s why the paper reports accuracy-vs-space but conveniently avoids reporting inference wall-clock time. The baseline numbers also look seriously underreported. “Several orders of magnitude” speedups for vector search? Really? Has anyone actually reproduced these results?