Surpassing vLLM with a Generated Inference Stack

infinity.inc

37 points by lukebechtel 18 hours ago


ntonozzi - 15 hours ago

Why do they need to run benchmarks to confirm performance? Couldn't they run a set of example prompts and verify they get exactly the same output token probabilities as vLLM? The fact that they aren't doing this makes me suspicious that they are in fact not doing the same thing as vLLM.
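The check described above is cheap to sketch. Here's a minimal, hypothetical version, assuming both stacks can emit per-token log-probabilities under greedy decoding (the function name and tolerance are illustrative; in practice, differences in floating-point reduction order across kernels mean you compare within a tolerance rather than bit-exactly):

```python
import math

def outputs_match(ref_logprobs, test_logprobs, atol=1e-5):
    """Compare per-token log-probabilities from two inference stacks.

    ref_logprobs / test_logprobs: one log-prob per generated token,
    from the same prompt with greedy (temperature-0) decoding so the
    comparison is deterministic.
    """
    if len(ref_logprobs) != len(test_logprobs):
        return False
    # abs_tol absorbs kernel-level float noise; exact equality is too strict.
    return all(
        math.isclose(a, b, abs_tol=atol)
        for a, b in zip(ref_logprobs, test_logprobs)
    )

ref = [-0.12, -1.07, -0.33]
print(outputs_match(ref, [-0.12, -1.07, -0.33]))  # True
print(outputs_match(ref, [-0.12, -1.30, -0.33]))  # False
```

If the two stacks agree on every prompt in a test set, you have much stronger evidence of equivalence than a throughput number alone.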

It is also a bit weird that they are not incorporating speculative decoding, which seems like a critical performance optimization, especially for decode-heavy workloads.
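For anyone unfamiliar, the core of speculative decoding is: a cheap draft model proposes several tokens, and the target model verifies them, keeping the longest agreeing prefix. A toy greedy sketch (the callables are stand-ins, not any real model API; real implementations verify all k draft tokens with a single batched forward pass of the target model):

```python
def speculative_step(draft_next, target_next, prefix, k=4):
    """One greedy speculative-decoding step.

    draft_next / target_next: callables mapping a token prefix to the
    next token (toy stand-ins for a small draft model and the big
    target model).
    """
    # Draft model cheaply proposes k tokens.
    draft = []
    for _ in range(k):
        draft.append(draft_next(prefix + draft))
    # Target verifies position by position: keep draft tokens while
    # they agree, and append the target's own token at the first
    # disagreement.
    accepted = []
    for tok in draft:
        t = target_next(prefix + accepted)
        accepted.append(t)
        if t != tok:
            break
    return accepted

# Toy deterministic stand-ins: draft agrees with target except at length 4.
target = lambda p: len(p) % 3
draft = lambda p: (len(p) % 3) if len(p) != 4 else 9
print(speculative_step(draft, target, prefix=[1, 2]))  # -> [2, 0, 1]
```

When the draft model agrees often, you emit several tokens per target-model pass, which is exactly why it matters for decode-heavy workloads.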

2001zhaozhao - 3 hours ago

Every example like this makes it obvious that you can now use ML-like optimization approaches on well-specified, very-well-tested software problems with a clear optimization goal. Keep a change if it improves the objective while maintaining correctness; discard it if it doesn't. AI-descent strikes again.
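The "keep if it improves, discard if it doesn't" loop is just accept-if-better search. A minimal sketch (purely illustrative; all names are made up, and the mutate/score/correct callables stand in for whatever code-generation, benchmark, and test-suite steps a real system would use):

```python
import random

def optimize(candidate, score, correct, mutate, steps=200, seed=0):
    """Accept-if-better search: mutate the best candidate and keep the
    mutation only when it passes the correctness check AND improves
    the score. Seeded RNG makes runs reproducible."""
    rng = random.Random(seed)
    best, best_score = candidate, score(candidate)
    for _ in range(steps):
        trial = mutate(best, rng)
        if correct(trial):          # never accept an incorrect candidate
            s = score(trial)
            if s > best_score:      # strict improvement only
                best, best_score = trial, s
    return best, best_score

# Toy use: nudge a vector toward [3, 3, 3] under a bounds "correctness" check.
score = lambda v: -sum((a - 3) ** 2 for a in v)
correct = lambda v: all(0 <= x <= 10 for x in v)
mutate = lambda v, rng: [x + rng.choice([-1, 0, 1]) for x in v]
best, s = optimize([0, 0, 0], score, correct, mutate)
```

Swap the toy callables for "generate a kernel variant", "run the benchmark", and "run the test suite" and you have the skeleton of the approach being discussed.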

Maybe I should learn more about ML to have a better instinct on optimization methods in general, so I can actually build AI optimizers like these.

storus - 12 hours ago

Does it support paged attention like vLLM though? Without that they will run into memory fragmentation quickly.
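For context, the point of paged attention is the allocation bookkeeping: the KV cache is carved into fixed-size blocks, and each sequence holds a block table instead of one contiguous slab, so freed blocks are reusable by any sequence and fragmentation stays bounded. A toy model of the idea (not vLLM's implementation; the class and method names here are made up):

```python
class PagedKVCache:
    """Sketch of paged-attention bookkeeping (block table only)."""

    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free = list(range(num_blocks))   # free-block pool
        self.tables = {}                      # seq_id -> [block_id, ...]
        self.lengths = {}                     # seq_id -> tokens written

    def append_token(self, seq_id):
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:          # current block full (or first token)
            if not self.free:
                raise MemoryError("cache exhausted; evict or preempt a sequence")
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def release(self, seq_id):
        # Freed blocks go back to the shared pool, reusable by any sequence.
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

kv = PagedKVCache(num_blocks=4, block_size=2)
for _ in range(3):
    kv.append_token("a")   # 3 tokens -> 2 blocks allocated
kv.release("a")            # both blocks return to the pool
```

Without something like this, long-running mixed-length batches leave unusable gaps in a contiguously allocated cache, which is the fragmentation issue the comment raises.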

rfw300 - 15 hours ago

OK... we need way more information than this to validate this claim! I can run Qwen-8B at 1 billion tokens per second if you don't check the model's output quality. No information is given about the source code, correctness, batching, benchmark results, quantization, etc. etc. etc.

acuozzo - 13 hours ago

Luke: Do you have benchmarks for BF16?

cermicelli - 9 hours ago

This says nothing. You're claiming X is better, but there's no way to check or look into what it does, how it works, or whether it just cloned the vLLM code. At least the C compiler Claude wrote was the verifiable kind.

This is plain bullshit.