Show HN: Llama 3.1 70B on a single RTX 3090 via NVMe-to-GPU bypassing the CPU

github.com

364 points by xaskasdf a day ago


Hi everyone, I'm kinda involved in retrogaming, and through some experiments I ran into the following question: "Would it be possible to run transformer models by bypassing the CPU/RAM and connecting the GPU directly to the NVMe?"

This is the result of that question and some weekend vibecoding (the linked library repository is in the README as well). It seems to work, even on consumer GPUs, though it should work better on professional ones.

MarcLore - 12 hours ago

Bypassing the CPU for NVMe-to-GPU transfers is clever. The bottleneck for running large models locally has always been the memory hierarchy; this essentially treats NVMe as extended VRAM with direct DMA.
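
For concreteness, the standard way to get that direct path on NVIDIA hardware is GPUDirect Storage via the cuFile API. A minimal sketch of pulling one weight shard from an NVMe file straight into a cudaMalloc'd buffer might look like the snippet below (error handling and O_DIRECT alignment omitted; I'm not claiming this is how the linked repo does it):

    #define _GNU_SOURCE           // for O_DIRECT
    #include <cuda_runtime.h>
    #include <cufile.h>           // GPUDirect Storage (libcufile)
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    // Sketch: DMA `nbytes` at `file_off` of an NVMe-backed file into VRAM,
    // without bouncing through host RAM.
    void *load_shard(const char *path, size_t nbytes, off_t file_off) {
        cuFileDriverOpen();

        int fd = open(path, O_RDONLY | O_DIRECT);
        CUfileDescr_t descr;
        memset(&descr, 0, sizeof(descr));
        descr.handle.fd = fd;
        descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
        CUfileHandle_t fh;
        cuFileHandleRegister(&fh, &descr);

        void *dev_ptr = NULL;
        cudaMalloc(&dev_ptr, nbytes);
        cuFileBufRegister(dev_ptr, nbytes, 0);    // register the GPU buffer with GDS

        // The actual NVMe -> GPU transfer; returns bytes read.
        ssize_t n = cuFileRead(fh, dev_ptr, nbytes, file_off, 0);
        (void)n;                                  // check n == nbytes in real code

        cuFileBufDeregister(dev_ptr);
        cuFileHandleDeregister(fh);
        close(fd);
        return dev_ptr;
    }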

I wonder how this compares to Apple's unified memory approach on M-series chips for similar workloads. The M4 Max can fit 70B models entirely in memory without any offloading tricks, though at lower throughput than a 3090.

Would be interesting to see comparative benchmarks: this NVMe approach on a 3090 vs M4 Max native, especially for batch inference where the NVMe latency might be amortized.

01100011 - 20 hours ago

Yeah, GPUDirect Storage should allow you to DMA straight to/from a storage device.

I wonder... what if the m.2 storage was actually DRAM? You probably don't need persistence for spilling a model off the GPU. How would it fare vs just adding more host memory? The m.2 ram would be less flexible, but would keep the system ram free for the CPU.

randomtoast - a day ago

0.2 tok/s is fine for experimentation, but it is not interactive in any meaningful sense. For many use cases, a well-quantized 8B or 13B that stays resident will simply deliver a better latency-quality tradeoff.

umairnadeem123 - 19 hours ago

0.2 tok/s is slow for chat but perfectly fine for batch/async workloads. I run automated content generation pipelines where a single job kicks off dozens of LLM calls (script generation, metadata, descriptions) and none of them need to be interactive. The whole job takes 20 minutes anyway because of image generation bottlenecks. Being able to run a 70B model locally for those batch calls instead of paying per-token API costs would be a significant cost reduction, even at this speed.

jacquesm - 21 hours ago

This is an interesting area for experiments. I suspect that in the longer term, model optimization (knowing which bits you can leave out without affecting the functioning of the model) will become the dominant area of research, just like it did with compression algorithms, because a model is effectively a lossy compression scheme.

And that's good because that increases democratization of AI away from the silos that are being created.

rl3 - a day ago

Nice. I've been looking at doing something similar, more on the order of running a 1T model with less than half the available VRAM.

One workup indicated it was theoretically possible to modify a piece of SGLang's routing layer to support JIT predict-ahead expert swaps from Gen5 NVMe storage straight into GPU memory.

I'm hoping that proves true. The setup relies on NVIDIA Dynamo, so NIXL primitives are available to support that.
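
Roughly the shape I have in mind, as a generic sketch with made-up helper names (nothing here is SGLang's actual routing layer or NIXL, just the double-buffered predict-ahead idea):

    #include <future>
    #include <vector>

    struct ExpertSlot { void *dev_ptr; size_t bytes; };

    // Assumed helpers, not real APIs: a GPUDirect-style NVMe -> VRAM read,
    // the expert forward pass, and a router lookahead guess.
    void gds_read_expert(int expert_id, ExpertSlot &slot);
    void run_expert(const ExpertSlot &slot);
    int  predict_next_expert(int current_expert);

    // Double-buffered decode step: compute with the resident expert while a
    // worker thread streams the predicted next expert into the spare slot.
    void decode_with_prefetch(const std::vector<int> &experts, ExpertSlot slots[2]) {
        gds_read_expert(experts[0], slots[0]);                 // cold start
        for (size_t i = 0; i < experts.size(); ++i) {
            ExpertSlot &cur  = slots[i % 2];
            ExpertSlot &next = slots[(i + 1) % 2];
            auto prefetch = std::async(std::launch::async, [&] {
                gds_read_expert(predict_next_expert(experts[i]), next);
            });
            run_expert(cur);      // overlap compute with the NVMe transfer
            prefetch.wait();      // a misprediction would force a blocking reload here
        }
    }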

Curious if anyone's tried this already.

civicsquid - 17 hours ago

Really cool. I'm wondering: what background did you need to be able to think of the question that resulted in this project?

I know you said you're involved in some retrogaming and were experimenting, but as someone who works in a world where hardware is pretty heavily abstracted away, even if I got into retrogaming I don't know that I'd have considered that there might be a systems improvement lying around. Beyond the creative aspect, it feels like there is some systems and hardware background that helped put the idea together (and I'd be interested to go learn some of that systems/hardware knowledge myself).

davideom0414 - 10 hours ago

Really interesting experiment, I should have done this before. Do you have numbers on effective throughput vs. PCIe theoretical bandwidth? I'm curious whether this is primarily latency-bound or bandwidth-bound in practice. Can someone tell me?
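
My own back-of-envelope, assuming the weights really do stream from NVMe once per token (these numbers are guesses, not measurements from the repo): a Q4 70B is roughly 35-40 GB of weights, and a Gen4 x4 NVMe delivers about 7 GB/s in practice, so the ceiling would be around 7 / 40 ≈ 0.18 tok/s. That lines up with the ~0.2 tok/s mentioned elsewhere in the thread, which would make this bandwidth-bound rather than latency-bound once the reads are large and sequential.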

7777777phil - 11 hours ago

Cool hack, but it's 0.5 tok/s on a 70B when a 7B does 30+ on the same card. NVIDIA's own research says 40-70% of agentic tasks could run on sub-10B models, and the quality gap has closed fast.

Wuzado - a day ago

I wonder - could this be used for multi-tier MoE? E.g. active + most-used experts in VRAM, often-used ones in RAM, and less-used ones on NVMe?
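
Something like the following is what I picture, purely as an illustration with made-up names (the gds_read_into helper stands in for a GPUDirect-style NVMe read, similar to the sketch upthread):

    #include <cuda_runtime.h>
    #include <sys/types.h>
    #include <unordered_map>

    enum class Tier { VRAM, HOST_RAM, NVME };

    struct ExpertHome { Tier tier; void *ptr; size_t bytes; off_t nvme_off; };

    // Assumed helper: DMA an expert's weights from NVMe straight into VRAM.
    void gds_read_into(void *dev_dst, size_t bytes, off_t nvme_off);

    // Return a device pointer for the requested expert, pulling it up a tier
    // when it is not already resident. `staging` is a reusable VRAM slot.
    void *fetch_expert(std::unordered_map<int, ExpertHome> &placement,
                       int expert_id, void *staging, cudaStream_t stream) {
        const ExpertHome &home = placement.at(expert_id);
        switch (home.tier) {
        case Tier::VRAM:                      // hot: already resident
            return home.ptr;
        case Tier::HOST_RAM:                  // warm: pinned host RAM -> VRAM
            cudaMemcpyAsync(staging, home.ptr, home.bytes,
                            cudaMemcpyHostToDevice, stream);
            return staging;
        case Tier::NVME:                      // cold: NVMe -> VRAM direct
            gds_read_into(staging, home.bytes, home.nvme_off);
            return staging;
        }
        return nullptr;
    }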

Aurornis - 17 hours ago

Cool project. Can you provide more details about your DKMS patching process for consumer GPUs? This would be fun to try out, but I’d need some more details on that patch process first.

throwaway2027 - a day ago

Didn't DirectX add an API (DirectStorage) for loading assets directly to GPU memory? Would that work?

exabrial - a day ago

I feel like we need an entirely new type of silicon for LLMs. Something completely focused on bandwidth and storage, probably at the sacrifice of raw computation power.

stuaxo - 11 hours ago

Interesting. Can AMD GPUs do direct io like this?

spwa4 - 13 hours ago

I've often wondered about doing this with extreme compression. What if you did extreme compression + decompression on the GPU? Because you're leaving a lot of compute unused.

sylware - 14 hours ago

Isn't that Linux DMA-BUF?

timzaman - 16 hours ago

Umm, sorry, but the CPU can easily keep up shuttling data to/from your NVMe, especially over ancient Gen3 PCIe. Not sure why you'd do this.

jauntywundrkind - a day ago

Could be neat to see what happens when you give the 8B something like 6GB of VRAM instead of 10GB. Something in between, where you still need NVMe, but not like the 3x ratio of the 70B model on 23GB.

Nice work. PCIe P2P (GPUDirect™) is such great stuff. Cool to see!
