BarraCUDA: Open-source CUDA compiler targeting AMD GPUs

github.com

248 points by rurban 10 hours ago


h4kunamata - 8 hours ago

>Requirements

>A will to live (optional but recommended)

>LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.

>Open an issue if theres anything you want to discuss. Or don't. I'm not your mum.

>Based in New Zealand

The Oceania sense of humor is like no other haha

The project owner strongly emphasizes that there is no LLVM dependency; in a world of AI slop, this kind of do-it-yourself approach is so refreshing.

The sheer amount of knowledge required to even start such a project is really something else, and proving the manual wrong at the machine-language level is something else entirely.

When it comes to AMD, "no CUDA support" is the biggest "excuse" people give for joining NVIDIA's walled garden.

Godspeed to this project; the more competition there is, the less NVIDIA can keep destroying PC parts pricing.

freakynit - 3 hours ago

The first issue created by someone other than the author is from geohot himself... the GOAT: https://github.com/Zaneham/BarraCUDA/issues/17

I would love to see these folks working together on this to break apart NVIDIA's stranglehold on the GPU market (which, according to the internet, allows them insane 70% profit margins, thereby raising costs for all users worldwide).

BatteryMountain - an hour ago

In the old days we had these kinds of wars with CPU instruction sets & extensions (SSE, MMX, x64, ...). In a way I feel that CUDA should be opened up & generalized so that other manufacturers can use it too, the same way CPUs equalized on most instruction sets. That way the whole world wouldn't be beholden to one manufacturer (Big Green), and it would calm down the scarcity effect we have now. I'm not an expert on GPU tech; would this be something that is possible? Is CUDA a driver feature or a hardware feature?
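
From what I can tell it seems to be mostly software: the CUDA you write is basically C++ with a few extensions plus a runtime API, and AMD's HIP already mirrors that API almost name-for-name. A toy sketch of the kind of code involved (standard CUDA runtime calls; the kernel itself is just an illustrative example, not from this project):

    #include <cuda_runtime.h>

    // Kernel: one thread per element of the output.
    __global__ void vec_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    // Host side: allocate device buffers, copy, launch, copy back.
    // The HIP version is nearly identical: cudaMalloc -> hipMalloc,
    // cudaMemcpy -> hipMemcpy, same <<<grid, block>>> launch syntax.
    void add_on_gpu(const float *a, const float *b, float *c, int n) {
        float *da, *db, *dc;
        size_t bytes = n * sizeof(float);
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);
        vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
        cudaFree(da); cudaFree(db); cudaFree(dc);
    }

The proprietary part is everything underneath: nvcc, the PTX/SASS toolchain, and the driver, which is presumably what projects like this re-implement for AMD silicon.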

piker - 8 hours ago

> # It's C99. It builds with gcc. There are no dependencies.

> make

Beautiful.

exabrial - an hour ago

Is OpenCL a thing anymore? I sorta thought that's what it was supposed to solve.

But I digress; I just had a quick poke around... I don't know what I'm looking at, but it's impressive.

esafak - 8 hours ago

Wouldn't it be funny and sad if a bunch of enthusiasts pulled off what AMD couldn't :)

ByThyGrace - 6 hours ago

How feasible is it for this to target earlier AMD archs, down to even GFX1010, the original RDNA series, aka the poorest of the GPU poor?

bravetraveler - 8 hours ago

> No HIP translation layer.

Storage capacity everywhere rejoices

skipants - 4 hours ago

Perusing the code, the translation seems quite complex.

Shout out to https://github.com/vosen/ZLUDA which is also in this space and quite popular.

I got ZLUDA to work well enough with ComfyUI.

dokyun - 33 minutes ago

Love to see just a simple compiler in C with a Makefile instead of some amalgamation of 5 languages, 20 libraries, and some autotools/cmake shit.

whizzter - 9 hours ago

Not familiar with CUDA development, but doesn't CUDA support C++? Skipping Clang/LLVM and going "pure" C seems quite limiting in that case.
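
For example, I'd assume a C99-only frontend couldn't parse everyday CUDA C++ like templated kernels (made-up snippet, just to illustrate the point):

    #include <cuda_runtime.h>

    // Templated kernel: legal CUDA C++, not expressible in C99.
    template <typename T>
    __global__ void saxpy(T a, const T *x, T *y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    void run(float af, const float *xf, float *yf,
             double ad, const double *xd, double *yd, int n) {
        int blocks = (n + 255) / 256;
        saxpy<float><<<blocks, 256>>>(af, xf, yf, n);    // instantiate for float
        saxpy<double><<<blocks, 256>>>(ad, xd, yd, n);   // and for double
    }

Libraries like Thrust and CUB lean on this heavily, so a C-only subset would presumably rule those out.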

yodon - 8 hours ago

<checks stock market activity>

gzread - 6 hours ago

Nice! It was only a matter of time until someone broke Nvidia's software moat. I hope Nvidia's lawyers don't know where you live.

phoronixrly - 9 hours ago

Putting a registered trademark in your project's name is quite a brave choice. I hope they don't get a c&d letter when they get traction...

gclawes - 7 hours ago

What's the benefit of this over tinygrad?

latchkey - 6 hours ago

Note that this targets GFX11, which is RDNA3. Great for consumer, but not the enterprise (CDNA) level at all. In other words, not a "cuda moat killer".

sam_goody - 8 hours ago

Wow!! Congrats on the launch!

Seeing insane investments (in time/effort/knowledge/frustration) like this makes me enjoy HN!!

(And there is always the hope that someone at AMD will see this and actually pay you to develop the thing... Who knows.)

7speter - 6 hours ago

Will this run on cards that don't have ROCm / latest ROCm support? Because if not, it's only gonna be a tiny subset of a tiny subset of cards that this will allow CUDA to run on.

latchkey - 6 hours ago

See also: https://scale-lang.com/

> Write CUDA code. Run Everywhere. Your CUDA skills are now universal. SCALE compiles your unmodified applications to run natively on any accelerator, ending the nightmare of maintaining multiple codebases.