Gentoo Linux 2025 Review
gentoo.org
333 points by akhuettel 2 days ago
Gentoo is the best! Once you get the hang of creating a bootable system and feel comfortable painting outside the lines, it feels like Linux From Scratch, just without needing to manually build everything. I automated building system images with just podman (to build the rootfs) and qemu (to test-boot and write the rootfs, plus foreign-arch emulation), and basically just build new system images once a week with CI for all my hardware, then rsync to update. Probably one of the coolest things I’ve ever built; at this point I’m effectively building my own Linux distro from source and it’s all defined in Containerfiles! I have such affection for the Gentoo team for enabling this project. It was shocking to discover how little they operate on; I’m definitely setting up a recurring donation.
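For a rough idea of what such a pipeline can look like — note this is a sketch, not the poster's actual setup, and every image name, path, and host below is made up for illustration:

```shell
# Hypothetical weekly CI job, assuming a Containerfile that
# produces a complete Gentoo rootfs.

# Build the rootfs image with podman.
podman build -t gentoo-rootfs:weekly -f Containerfile .

# Flatten the built image into a plain rootfs tarball.
ctr=$(podman create gentoo-rootfs:weekly)
podman export "$ctr" -o rootfs.tar
podman rm "$ctr"

# Test-boot the result under qemu before rolling it out
# (disk image preparation not shown here).
qemu-system-x86_64 -m 2G -drive file=test.img,format=raw -nographic

# Push the update to real hardware over rsync.
rsync -aHAX --delete rootfs/ root@somehost:/
```

The appeal of this shape is that every step runs in commodity tooling (podman, qemu, rsync) with no distro-specific build farm required.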
I think it is a great learning opportunity, but after using Gentoo for a decade or so, I prefer Arch these days. So if you want to learn more about Linux and its ecosystems, go for it, do it for a few months or years.
That said, I haven't tried Gentoo with binaries from official repositories yet. Maybe that makes it less time-consuming to keep your system up to date.
Been happily and very successfully using the official binpkgs; it works really well. Sometimes there's a slight delay for the binary versions of source packages to appear in the repositories, but that's about it. I guess it's kind of like running Arch, but with portage <3! And the occasional compilation because your USE flags didn't quite match the binaries.
Did you document this somewhere? I'm interested to know more
Nah, first time I’ve mentioned it anywhere. Happy to answer questions, if there’s interest maybe this could be my reason for a first blog post.
I would encourage you to write about it as well. It seems interesting and unconventional.
I used to tinker a lot with my systems, but as I've gotten older and my time became more limited, I've abandoned a lot of it and now favor "getting things done". Though I still tinker a lot with my systems and have my workflow and system setup, it is no longer at the level of re-compiling the kernel with my specific optimizations, if that makes sense. I am now paid to "tinker" with my clients' systems, but I stay away from the unconventional there, if I can.
I did reach a point where describing systems is useful, at least as a way of documenting them. I keep circling around NixOS but haven't taken the plunge yet. It feels like Containerfiles are an easier approach, but they (at least Docker) sort of feel designed around describing application environments as opposed to full system environments. So your approach is intriguing.
> It feels like Containerfiles are an easier approach, but they (at least Docker) sort of feel designed around describing application environments as opposed to full system environments.
They absolutely are! I actually originally just wanted a base container image for running services on my hosts that (a) I could produce a full source code listing for and (b) gave me full visibility over the BoM, and realized I could just ‘FROM scratch’ and pull in Gentoo’s stage3 to basically achieve that. That also happens to be the first thing you do in a new Gentoo chroot, and I realized that pretty much every step of the Gentoo install that you run after (installing software, building the kernel, setting up users, etc.) could also be run in the container. What are containers if not “portable executable chroots”, after all?

My first version of this build system was literally to copy / from the container to a mounted disk I manually formatted. Writing to disk is actually the most unnatural part of this whole setup, since no one really has a good solution for doing it without using the kernel; I used to format and mount devices directly in a privileged container, but now I just boot a qemu VM in an unprivileged container and do it in an initramfs, since I was already building those manually too.

I found while iterating on this that all of the advantages you get from Containerfiles (portability, repeatability, caching, minimal host runtime, etc.) naturally translated over to the OS builder project, and since I like deploying services as containers anyway there’s a high degree of reuse going on vs needing separate tools and paradigms everywhere.
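As a sketch of the "FROM scratch plus stage3" idea — the stage3 filename and the exact emerge steps here are illustrative guesses at the shape, not the parent's actual Containerfile:

```dockerfile
# Illustrative Containerfile: treat the image build like a Gentoo chroot.
FROM scratch
# ADD auto-extracts a local tarball, so stage3 becomes the entire rootfs.
ADD stage3-amd64-openrc.tar.xz /
# From here on, it's essentially the handbook steps, run as build layers.
COPY make.conf /etc/portage/make.conf
RUN emerge-webrsync && \
    emerge --update --deep --newuse @world && \
    emerge sys-kernel/gentoo-kernel
RUN useradd -m -G wheel admin
```

Each layer caches, so a tweak to a late step (say, adding a user) doesn't force a world rebuild — which is exactly the repeatability/caching win described above.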
I’ll definitely write it up and post it to HN at some point, trying to compact the whole project in just that blurb felt painful.
Not what was mentioned by parent but I've been working on an embedded Linux build system that uses rootfs from container images: https://makrocosm.github.io/makrocosm/
The example project uses Alpine base container images, but I'm using a Debian base container for something else I'm working on.
Honestly this is just sorta a Tuesday for an advanced Gentoo user? There are lots of ways to do this documented on the Gentoo wiki. Ask in IRC or on the Forum if you can't find it. "Catalyst" is the method used by the internal build systems to produce images, for instance https://wiki.gentoo.org/wiki/Catalyst.
After running Gentoo for a while back in 2004, I decided I don't really want to wait for everything to compile.
For those that don't want to wait for everything to compile - https://www.calculate-linux.org/
It's still 100% pure Gentoo (and actually these days even vanilla Gentoo itself offers precompiled binaries), so you only compile things in the rare cases where a binary isn't already compiled with the USE/config that you want.
That’s mostly why I build system images in CI; my slowest builds (qemu user-mode emulation of aarch64 for, e.g., Raspberry Pi boards) can take multiple days, so I just declared a one-week window between updates and pull in the changes via rsync. I even boot the images with qemu as part of the testing cycle. At some point I might try building and hosting prebuilt bins like Gentoo does now; I don’t use those, though, because I explicitly want to build everything from source.
For me, the most underrated takeaway here is the state of RISC-V support.
While other distributions are struggling to bootstrap their package repositories for new ISAs and waiting for build farms to catch up, Gentoo's source-based nature makes it architecture-agnostic by definition. I applaud the RISC-V team for having achieved parity with amd64 for the @system set. This proves that the meta-distribution model is the only scalable way to handle the explosion of hardware diversity we are seeing post-2025. If you are building an embedded platform or working on custom silicon, Gentoo is a top-tier choice. You cross-compile the stage1 and portage handles the rest.
While I was always a source-based/personalized-distribution personality type, this is also a big part of why I moved to Gentoo in early 2004 (for amd64, not RISC-V / other embedded per your example). While the Pentium 4's very deep pipelines and compiler-flag sensitivities (and the name itself, for the fastest Penguin) drove the for-speed perception of the compile-just-for-my-system style, it really plays well to all customization/configuration hacker mindsets.
That is a fantastic historical parallel. The early amd64 days were arguably Gentoo's killer app moment. While the binary distributions were wrestling with the logistical nightmare of splitting repositories and figuring out the /lib64 vs /lib standard, Gentoo users just changed their CHOST, bootstrapped and were running 64-bit native. You nailed the psychology of it, too. The speed marketing was always a bit of a red herring. The ability to say "I do not want LDAP support in my mail client" and have the package manager actually respect that is cool. It respects the user's intelligence rather than abstracting it away.
Since you've been on the ride since '04, I'm curious to hear your thoughts. How do you feel the maintenance burden compares today versus the GCC 3.x era? With the modern binhost fallback and the improvements in portage, I feel like we now spend less time fighting rebuild loops than back then? But I wonder if long time users feel the same.
> The ability to say "I do not want LDAP support in my mail client" and have the package manager actually respect that is cool.
I tried Gentoo around the time that OP started using it, and I also really liked that aspect of it. Most package managers really struggle with this, and when there is configuration, the default is usually "all features enabled". So, when you want to install, say, ffmpeg on Debian, it pulls in a tree of over 250 (!!) dependency packages. Even if you just wanted to use it once to convert a .mp4 container into .mkv.
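For comparison, on Gentoo that kind of per-feature control is a one-line entry in /etc/portage/package.use — the package and flag below are real, but the file name is just an illustration:

```
# /etc/portage/package.use/mail (illustrative file name)
# No LDAP support compiled into the mail client:
mail-client/thunderbird -ldap
```

After editing, something like `emerge --ask --changed-use @world` rebuilds only the packages whose USE flags actually changed, so the setting is enforced by the package manager rather than bolted on.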
> Since you've been on the ride since '04, I'm curious to hear your thoughts. How do you feel the maintenance burden compares today versus the GCC 3.x era? With the modern binhost fallback and the improvements in portage, I feel like we now spend less time fighting rebuild loops than back then? But I wonder if long time users feel the same.
I'm another one on it since the same era :)
In general stable has become _really_ stable, and unstable is still mostly usable without major hiccups. My maintenance burden is limited nowadays compared to 10y ago - pretty much running `emerge -uDN @world --quiet --keep-going` and fixing issues if any, maybe once a month I get package failures but I run a llvm+libcxx system and also package tests, so likely I get more issues than the average user on GCC.
For me these days it's not about the speed anymore of course, but really the customization options and the ability to build pretty much anything I need locally. I also really like the fact that ebuilds are basically bash scripts, and if I need to further customize or reproduce something I can literally copy-paste commands from the package manager in my local folder.
The project has successfully implemented a lot of by-default optimizations and best practices, and in general I feel the codebases for system packages have matured to the point where it's odd to run into internal compiler errors, weird dependency issues, whole-world rebuilds, etc. From my point of view it also helped a lot that many compilers began enforcing more modern and stricter C/C++ standards over time, and at the same time we got GitHub, CI workflows, better testing tools, etc.
I run `emerge -e1 @world` maybe once a year just to shake out stuff lurking in the shadows (like stuff compiled with clang 19 vs clang 21), but it's really normally not needed anymore. The configuration stays pretty much untouched unless I want to enable a new USE for a new package I'm installing.
> so likely I get more issues than the average user on GCC.
It's been years since I had a build failure, and I even accept several packages on ~amd64. (With GCC.)
I am replying here as a kind of "better place to attach".
Anyway, to answer grandparent, I basically never had rebuild loops in 19 years... just emerge -uU world every day or sometimes every week. I have been running the same base system since... let's see:
qlop -tvm|h1
2007-01-18T19:50:33 >>> x11-base/xorg-server-1.1.1-r4: 9m23s
I have never once had to rebuild the whole system from scratch in those 19 years. (I've just rsync'd the rootfs from machine to machine as I upgraded hardware and gradually rebuilt, because as many others here have said, for me it wasn't about "perf of everything" or some kind of reproducible system, but "more customization + perf of some things".) The upgrade from monolithic X11 to split X11 was "fun", though. /s

I do engage in all sorts of package.mask / per-package USE / many global USE. I have my own portage local overlay for things where I disagree with upstream. I even have an automated system to "patch" my disagreements in. E.g., I control how fast I upgrade my LLVM junk, so I do it on my own timeline. Mostly I use GCC. I control that, too. Any really slow individual build, basically.
If, over the decades, they ever did anything that made it look like crazy amounts of rebuilds would happen, I'd tend to wait a few days or a week and then figure something out. If some new dependency brings in a mountain of crap, I usually figure out how to block that.
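That kind of blocking and holding-back is also just a config fragment; for example (file names, version numbers, and the blocked package here are hypothetical):

```
# /etc/portage/package.mask/llvm (illustrative): hold back LLVM
# until I'm ready to spend the build time.
>=llvm-core/llvm-21
>=llvm-core/clang-21

# /etc/portage/package.mask/blockers (illustrative): refuse a new
# dependency I don't want pulled in.
app-misc/some-unwanted-dep
```

Portage then resolves updates around the mask instead of silently pulling the masked versions in, which is what makes "waiting out" a big migration practical.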
To be fair, it was not that difficult to create a pure 64-bit binary distro, and there were a few of them. The real issue was figuring out how to do mixed 32/64-bit, and this is where the fight about /lib directories originated. In a pure 64-bit distro the only way to run 32-bit binaries was to create a chroot with a full 32-bit installation. It took a while before better solutions were agreed on. This was the era of Flash and Acrobat Reader - all proprietary and all 32-bit only - so people really cared about 32-bit.
GCC 3.3 to 3.4 was a big thing, and could cause some issues if people didn't follow the upgrade procedures; also many C++ codebases needed minor adjustments. This has been much, much less of a problem since.
Additionally, Gentoo has become way more strict with USE flag dependencies, and it also checks whether binaries depend on old libs and doesn't remove those libs when updating a package, so the "app depends on old libstdc++" problem doesn't happen anymore. It then automatically removes the old versions once nothing needs them.
I have been running Gentoo since before '04, continuously, and things pretty much just work. I would be willing to put money on spending less time "managing my OS" than most who run other systems such as macOS, Windows, Debian, etc. Sure, my CPU gets to compile a lot, but that's about it.
And yes, the "--omg-optimize" was never really the selling point, but rather the USE flags, where there's complete control. Pretty much nothing else comes close, and it is why Gentoo is awesome.
Fedora and Debian have been shipping RISC-V versions of stable releases for a while. I don't think anyone is really struggling.
Embedded usually uses Yocto or Buildroot or whatever it's called. Never seen anyone use Gentoo.
I can speak for Yocto: it's completely built from source and has a huge variety of BSPs, usually vendor-created.
All distributions are source-based and bootstrapped from source. They ship binary packages by default (while offering source packages), whereas Gentoo defaults to source packages (but still has binary packages). There's literally no advantage to Gentoo here. What you're saying doesn't even make logical sense.
Other distros don't support RISC-V because nobody has taken the time to bother with it, since the hardware base is almost nonexistent.
> The Gentoo Foundation took in $12,066 in fiscal year 2025 (ending 2025/06/30); the dominant part (over 80%) consists of individual cash donations from the community. On the SPI side, we received $8,471 in the same period as fiscal year 2025; also here, this is all from small individual cash donations.
It's crazy how projects this large and influential can get by on so little cash. Of course a lot of people are donating their very valuable labour to the project, but the ROI from Gentoo is incredible compared to what it costs to do anything in commercial software.
This is, in a way, why it's nice that we have companies like Red Hat, SUSE, and so on. Even if you might not like their specific distros for one reason or another, they've found a way to make money while contributing back for everything they've received. Most companies don't do that.
Contribute back how and where? Definitely not to Gentoo if we look at the meagre numbers here.
Red Hat contributes to a broad spectrum of Linux packages, drivers, and of course the kernel itself [1].
One example is virtualization: the virtio stack is maintained by Red Hat (afaik). This is a huge driver behind the “democratization” of virtualization in general, allowing users and small companies to access performant virt without selling a kidney to VMware.
Also, Red Hat contributes to or maintains all of the components involved in OpenShift and OpenStack (one of which is virtio!).
Why should Red Hat be expected to contribute to Gentoo? A distro is funded by its own users. What distro directly contributes to another distro if it’s not a derivative or something?
Red Hat primarily contributes code to the kernel and various OSS projects, paid for by the clients on enterprise contracts. A paying client needs something and it gets done. Then the rest of us get to benefit by receiving the code for free. It’s a beautiful model.
If you look at lists of top contributors, Red Hat (along with the usual suspects in enterprise) are consistently at the top.
Presumably, contribute to the entire ecosystem in terms of package maintenance and other non-monetary forms.
As others mentioned, Red Hat (and SUSE) have been amazing for the overall Linux community. They give back far more than what the GPL requires of them. Nearly every one of their paid "enterprise" products has a completely free and open source version.
For example:
- Red Hat Identity Management -> FreeIPA (i.e. Active Directory for Linux)
- Red Hat Satellite -> The Foreman + Katello
- Ansible ... Ansible.
- Red Hat OpenShift -> OKD
- And more I'm not going to list.

OKD was a mess when I tried to use it years ago. The documentation was just a 1:1 copy-paste of the OpenShift docs despite significant differences in installation. It really wanted you to use OLM, but the upstream operators like Maistra (the Istio-based upstream of Red Hat Service Mesh) were often very out of date in the catalog, to the point of being incompatible with the current version of OKD. I raised the issue on GitHub and a Red Hat employee replied that they were not happy with the situation at the time, but to keep asking to show there was interest. I switched to Talos instead for a more vanilla k8s where I could actually get a service mesh installed.
Not really comparable to the experience I have running Keycloak, where the upstream documentation is complete, or FreeIPA, where it's identical to IdM and you can just use the Red Hat docs. Those are both excellent pieces of software we are lucky to have.
Red Hat contributes a huge amount to the open source ecosystem. They're one of the biggest contributors to the Linux kernel (maybe the biggest).
https://insights.linuxfoundation.org/project/korg/contributo...
It looks like they're second to Intel, at least by the LF's metric. That said, driver code tends to take up a lot of space compared to other areas. Just look at the mass of AMD template garbage here: https://github.com/torvalds/linux/tree/master/drivers/gpu/dr...
Intel has long been a big contributor--mostly driver stuff as I understand it. (Intel does a lot more software work than most people realize.) Samsung was pretty high on the list at one point as well. My grad school roommate (now mostly retired though he keeps his hand in) was in the top 10 individual list at one point--mostly for networking-related stuff.
Yes, that would be nice, but when I look at their GRUB src.rpm, for instance, some of those patches look original but actually came from Debian.
Back in the day when the boxes were on display in brick-and-mortar stores, SuSE was a great way to get up and running with Linux.
The OpenSUSE Tumbleweed installation on my desktop PC is nearing 2 years now and still rolling. It is a great and somewhat underrated distribution.
SuSE/openSUSE is innovating plenty of stuff which other distros find worth imitating; e.g., CachyOS and Omarchy as Arch derivatives felt that openSUSE-style btrfs snapshots were pretty cool.
It's a rock-solid distro, and if I had a use for enterprise support, I'd probably look into SLES as a pretty serious contender.
The breadth of what they're doing seems unparalleled, i.e. they have rolling release (Tumbleweed), delayed rolling release (Slowroll) which is pretty unique in and of itself, point release (Leap), and then both Tumbleweed and Leap are available in immutable form as well (MicroOS, and Leap Micro respectively), and all of the aforementioned with a broad choice of desktops or as server-focused minimal environments with an impressively small footprint without making unreasonable tradeoffs. ...if you multiply out all of those choices it gives you, it turns into quite a hairy ball of combinatorics, but they're doing a decent job supporting it all.
As far as graphical tools for system administration go, YaST is one of the most powerful and they are currently investing in properly replacing it, now that its 20-year history makes for an out-of-date appearance. I tried their new Agama installer just today, and was very pleased with the direction they're taking.
...so, I'm not quite sure what you're getting at with your "Back in the day..." I, too, remember the days of going to a brick-and-mortar store to buy Linux as a box set, and it was between Red Hat and SuSE. Since then, I think they've lost mindshare because other options became numerous and turned up the loudness, but I think they've been quietly doing a pretty decent job all this time and are still beloved by those who care to pay attention.
SUSE has a lot of ex-Red Hatters at high levels these days. Their CEO ran Asia-Pacific for a long time and North America commercial sales for a shorter period.
SUSE has always been pretty big in Europe but never was that prominent in North America except for IBM mainframes, which Red Hat chipped away at over time. (For a period, SUSE supported some mainframe features that Red Hat didn't--probably in part because some Red Hat engineering leadership was at least privately dismissive of the whole idea of running Linux on mainframes.)
Red Hat pushing for the disaster that is Wayland has set the Linux Desktop back decades.
It is the Microsoft of the Linux world.
I'm sorry but this is just completely disconnected from reality. Wayland is being successfully used every single day. Just because you don't like something doesn't mean it's inherently bad.
X11 is used successfully every day, by me. I will not use Wayland, or any other trash like systemd, pulseaudio, etc that RedHat and their ilk push.
And that's fine! No one cares! Keep using it!
No one cares? Then why is there a giant astroturfing campaign designed to make me out to be an antiquated old fuddy duddy?
Ever looked at the X11 source? It's pretty high quality. There is various ancient compatibility code that could easily be removed if desired, but who cares? It works fine. Wayland on the other hand has been a fucking disaster. Just like all the other crap RedHat pushes.
I was reading an old Phoronix thread the other day (from 2014) where someone said X11 is obsolete and everyone should switch to Wayland soon. LOL
Why is Wayland a disaster? Most of the Linux community is strongly in favor of it.
Red Hat certainly burns a lot of money in service of horrifyingly bad people. It's nice we get good software out of it, but this is not a funding model to glorify. And of course, American businesses not producing open source are the single most malignant force on the planet.
> Red hat certainly burns a lot of money in service of horrifyingly bad people.
Red Hat also has a nasty habit of pushing their decisions onto the other distributions; e.g.
- systemd
- pulseaudio (this one was more Fedora IIRC)
- Wayland
- Pipewire (which, to be fair, wasn't terrible by the time I tried it)
Pushing their decisions? This is comical.
I guess Debian, SUSE, Canonical, etc. get that email from Red Hat and just go along with it. We better make the switch; we don't want our ::checks notes:: competitor mad at us.
systemd and friends go around absorbing other projects by (poorly) implementing a replacement and then convincing the official project to give up.
I don’t know where they come from, but I try to avoid all in that list. To be fair, audio is a train wreck anyway.
Eh, pulseaudio got a lot better, and pipewire "just works" at this point (at least for me). Even Bluetooth audio works OOTB most of the time.
Maybe. The background of my comment: at the end of the '90s I worked in a company doing professional audio on Windows. We had multiple cards, with multiple inputs and outputs, different sampling frequencies, channels, bits per sample... The API was trivial. I learned it in an hour.
Fast-forward to last year: I was working with OpenGL (on Linux) and thought "I'll add sound". Boy... I was smashed by the zoo of APIs, subsystems one on top of another, lousy documentation... Audio, which for me was WAY easier than video, was suddenly way more complicated. From the userland POV, last year I also wanted to make a kind of BT speaker with a Raspberry Pi, and that was also a terrible experience.
So, I don't know... maybe I should give pipewire a try; at the time I was done after fighting with ALSA and PulseAudio, and at the first problem I killed it.
Pipewire rocks. Wayland is half-baked and a disaster on legacy systems. systemd... OpenRC is good enough, and it never fails at shutdown.
I don't know that Red Hat is a positive force. They seem to be on a crusade to make the Linux desktop incomprehensible to the casual user, which I suppose makes sense when their bread and butter depends on people paying them to fix stuff, instead of fixing it themselves.
You don’t know they are a positive force?
This, despite the fact that Rocky, Alma, Oracle Enterprise Linux, etc exist because of the hard work and money spent by Red Hat.
And what are those companies doing to fix this issue you claim Red Hat causes? Nothing. Because they like money, especially when all you have to do is rebuild and put your name on other people’s hard work.
And what exactly is incomprehensible? What exactly is it that they’re doing to the Linux desktop that make it so that people can’t fix their own problems? Isn’t the whole selling point of Rocky and Alma by most integrators is that it’s so easy you don’t need red hat to support it?
I think it's fair to say that Red Hat simply doesn't care about the desktop--at least beyond internal systems. You could argue the Fedora folks do to some degree but it's just not a priority and really isn't something that matters from a business perspective at all.
Can you name a company which does care about the Linux desktop? Over the years I'm pretty sure Red Hat contributed a great deal to various desktop projects; I can't think of anyone who contributed more.
Well Red Hat did make a go at a supported enterprise desktop distro for a time and, as I wrote, Fedora--which Red Hat supports in a variety of ways for various purposes--is pretty much my default Linux distro.
So I'm not being critical. Yes, Red Hat employees do contribute to projects that are most relevant to the desktop even if doing so is not generally really the focus of their day jobs. And, no, other companies almost certainly haven't done more.
Off the top of my head System76 jumps to mind with their hardware and Pop!_OS.
Canonical. At least they used to, although I'm not a fan of the recent (last ten years) Canonical.
Certainly, Ubuntu used to be friendlier to new would-be Linux desktop users for a variety of reasons. (And we could get into some controversial decisions/directions it's taken but I won't.) I'm sure lots of people still run Ubuntu although Canonical is less prominent these days. My impression is that Canonical was sort of a passion project of Mark Shuttleworth's and they're just a lot lower key at this point.
> Can you name a company which does care about the linux desktop?
To some extent Valve. They have to, since the Steam Deck's desktop experience depends on the "Linux desktop" being a good experience.
Fedora is probably the best out-of-the-box desktop experience. Red Hat does great things, even if the IBM acquisition has screwed things up.
I find systemd pleasant for scheduling and running services but enraging in how much it has taken over every other thing in an IMO subpar way.
It's not just systemd, though. You have to look at the whole picture, like the design of GNOME or how GTK is now basically a GNOMEy toolkit only (and if you dare point this out on Reddit, ebassi may go ballistic). They kind of take more and more control over the ecosystem and singularize it for their own control. This is also why I see "Wayland is the future", in part, as a means to lever away even more control; the situation is not the same, as xorg-server is indeed mostly just maintenance work by a few heroes such as alanc, but Wayland is primarily, IMO, an IBM/Red Hat project. Lo and behold, GNOME was the first to mandate Wayland and abandon Xorg, just as it was the first to slap systemd down into the ecosystem.
The usual semi conspiratorial nonsense. GNOME is only unusable to clickers that are uncomfortable with any UI other than what was perfected by windows 95. And Wayland? Really? Still yelling at that cloud?
I expect people will stop yelling about Wayland when it works as reliably as X, which is probably a decade away. I await your "works for me!" response.
It's very fair: you can say "X works for me", but everyone saying otherwise is in the wrong?
I don't get your point. People regularly complain that Wayland has lots of remaining issues and there are always tedious "you're wrong because it works perfectly for me!" replies, as if the fact that it works perfectly for some people means that it works perfectly for everyone.