Show HN: Run TRELLIS.2 Image-to-3D generation natively on Apple Silicon
github.com
138 points by shivampkumar 7 hours ago
I ported Microsoft's TRELLIS.2 (a 4B-parameter image-to-3D model) to run on Apple Silicon via PyTorch MPS. The original requires CUDA, with flash_attn, nvdiffrast, and custom sparse convolution kernels, none of which work on a Mac.
I replaced the CUDA-specific ops with pure-PyTorch alternatives: a gather-scatter sparse 3D convolution, SDPA attention for sparse transformers, and a Python-based mesh extraction replacing CUDA hashmap operations. Total changes are a few hundred lines across 9 files.
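To illustrate the gather-scatter idea, here is a minimal sketch (not the code from the repo; the names are mine, a Python dict stands in for the CUDA hashmap, and a real implementation vectorizes the neighbor lookup):

    # Minimal sketch of a gather-scatter sparse 3x3x3 convolution
    # (submanifold style: outputs live at the same voxels as inputs).
    import torch

    def sparse_conv3d(coords, feats, weight):
        # coords: (N, 3) int voxel coordinates of occupied sites
        # feats:  (N, C_in) features, weight: (27, C_in, C_out)
        N, C_out = feats.shape[0], weight.shape[-1]
        # Hash occupied voxels -> row index (replaces the CUDA hashmap).
        table = {tuple(c): i for i, c in enumerate(coords.tolist())}
        out = feats.new_zeros(N, C_out)
        offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                   for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
        for k, (dx, dy, dz) in enumerate(offsets):
            src, dst = [], []
            for i, (x, y, z) in enumerate(coords.tolist()):
                j = table.get((x + dx, y + dy, z + dz))
                if j is not None:
                    src.append(j)   # gather from this neighbor...
                    dst.append(i)   # ...scatter into output row i
            if src:
                idx_s = torch.tensor(src, device=feats.device)
                idx_d = torch.tensor(dst, device=feats.device)
                # gather -> dense matmul -> scatter-add
                out.index_add_(0, idx_d, feats[idx_s] @ weight[k])
        return out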
Generates ~400K-vertex meshes from a single photo in about 3.5 minutes on an M4 Pro (24 GB). Not as fast as an H100, where it takes seconds, but it works offline with no cloud dependency.
https://github.com/shivampkumar/trellis-mac
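The attention swap is simpler: flash_attn's fused kernel can be replaced by torch's built-in scaled_dot_product_attention, which runs on MPS. A sketch under assumed shapes (flash_attn takes (batch, seq, heads, head_dim), so transposes are needed at the call site):

    # Sketch of replacing a flash_attn-style call with PyTorch SDPA.
    # SDPA expects (batch, heads, seq, head_dim); flash_attn uses
    # (batch, seq, heads, head_dim), hence the transposes.
    import torch
    import torch.nn.functional as F

    def attn(q, k, v):  # q, k, v: (batch, seq, heads, head_dim)
        q, k, v = (t.transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)
        return out.transpose(1, 2)

    device = "mps" if torch.backends.mps.is_available() else "cpu"
    x = torch.randn(1, 1024, 8, 64, device=device)
    print(attn(x, x, x).shape)  # torch.Size([1, 1024, 8, 64])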
gondar - 4 hours ago
Nice work. Although this model is not very good; I tried a lot of different image-to-3D models, and the one from meshy.ai is the best. Trellis is in the useless tier. I really hope there will be some good open-source models in this domain.

isoprophlex - 5 minutes ago
Meshy is indeed great, but I am terminally put off by their altogether terrible, sleazy, gamified, opaque web UI. It's like AliExpress and a lootbox game had a baby that's into mesh generation. Ugh.

shivampkumar - 4 hours ago
Hey, thanks for sharing this. I'm sure TRELLIS.2 definitely has room to improve, especially on texturing. From what I've seen personally, and from community benchmarks, it fares well on geometry and visual fidelity among open-source options, but I agree it's not perfect for every use case. Meshy is solid; I used it to print my girlfriend a mini 3D model of her for her birthday last year! Though it's worth noting it's a paid service and the free tier has usage limitations, while TRELLIS.2 is MIT licensed with unlimited local generation. Different tradeoffs for different workflows. Hopefully the open-source side keeps improving.

post-it - 3 hours ago
How much RAM does this use? Only sitting on 8 GB right now; I'm trying to figure out if I should buy 24 GB when it's time for a replacement or spring for 32.

shivampkumar - 2 hours ago
The model needed about 15 GB at peak during generation -- the 4B model loads multiple sub-models (1.3B each for the shape and texture flows). 8 GB won't be enough, but both 24 GB and 32 GB should be fine.

post-it - 2 hours ago
Thanks! Could it conceivably load the sub-models in series rather than in parallel? 8 still won't be enough, but I wonder if those with 16 could eke something out.

kennyloginz - 4 hours ago
So much effort, but no examples on the landing page.

shivampkumar - 4 hours ago
You're right, thanks for flagging this. Let me run something and push images.

villgax - 5 hours ago
That's always been possible with the MPS backend; the reason people choose to omit it in HF spaces/demos is that HF doesn't offer an MPS backend. People would rather have the thing work at best speeds than 10x worse speeds just for compatibility.

shivampkumar - 3 hours ago
IMO TRELLIS.2 is a slightly different case from the HF models scenario. It depends on five compiled CUDA-only extensions -- flex_gemm for sparse convolution, flash_attn, o_voxel for CUDA hashmap ops, cumesh for mesh processing, and nvdiffrast for differentiable rasterization. These aren't PyTorch ops that fall back to MPS -- they're custom C++/CUDA kernels. The upstream setup.sh literally exits with "No supported GPU found" if nvidia-smi isn't present. The only reason I picked this up is that I thought it was cool and no one was working on the open issue for Silicon back then (github.com/microsoft/TRELLIS.2/issues/74) requesting non-CUDA support.

Reubend - 5 hours ago
Are you saying the original one worked with MPS? Or are you just saying it was always theoretically possible to build what OP posted?

refulgentis - 5 hours ago
It's always been possible, but it's not possible because there's no backend, and no one wants it to be possible because everyone needs 10x the speed of running on a Mac? I'm missing something, I think.

shivampkumar - 3 hours ago
I thought it was cool, and then I found the open issue mentioned above, which convinced me it's definitely something more people want. It IS significantly slower: about 3.5 minutes on my MacBook vs. seconds on an H100. That's partly the pure-PyTorch backend overhead and partly just the hardware difference. For my use case the tradeoff works -- iterate locally without paying for cloud GPUs or waiting in queues.

jmatthews - 3 hours ago
Well done, rad.

serf - 3 hours ago
How long does output take? Trellis is a fun model.

shivampkumar - 3 hours ago
I was able to get it in 3.5 minutes from a single image on my 24 GB M4 Pro MacBook. I'm still working on this to try to replicate nvdiffrast better.
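A minimal sketch of the load-in-series idea post-it asks about above: run each sub-model, then free it before loading the next. The loader names are hypothetical, not TRELLIS.2's actual modules.

    # Sketch of running the sub-models in series to cap peak RAM:
    # load a stage, run it, and release it before the next one.
    # load_shape_flow / load_texture_flow are hypothetical loaders,
    # not TRELLIS.2's real module names.
    import gc
    import torch

    def run_stage(load_fn, inputs, device="mps"):
        model = load_fn().to(device).eval()
        with torch.no_grad():
            out = model(inputs)
        del model                    # drop the ~1.3B-param stage
        gc.collect()
        if device == "mps":
            torch.mps.empty_cache()  # return cached buffers to the OS
        return out

    # Usage (hypothetical): latents = run_stage(load_shape_flow, image_cond)
    #                       mesh    = run_stage(load_texture_flow, latents)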
hank808 - 4 hours ago [flagged]
Found an open source port, might look at it tonight.

shivampkumar - 3 hours ago
I mean, I can see that it's niche. I did not expect so many upvotes, but I guess it's less niche than I thought. If you're not working with 3D on Apple Silicon this isn't relevant to you. For the subset of people who are, running this 4B-parameter 3D generation model locally on a Mac was previously blocked by hard CUDA dependencies with no workaround.

svnt - 2 hours ago
Right, but it is at most a couple of hours with Claude Code, and posted on Sunday night.