Rocky Linux 10 Will Support RISC-V
rockylinux.org | 196 points by fork-bomber 10 months ago
Red Hat announced RISC-V yesterday with RHEL 10. So this seems rather expected.
https://www.redhat.com/en/blog/red-hat-partners-with-sifive-...
Debian Trixie is now in hard freeze, and it also has official support for riscv64 [1].
[1] What's new in Debian 13:
https://www.debian.org/releases/trixie/release-notes/whats-n...
I understand why people use RH and Rocky and even Oracle: the rpm wranglers. However, it's not for me.
My earliest mainstream distro was RH, back when they did it just for fun (pre-IBM), and then I slid slightly sideways towards Mandrake. Before that, I started off with Yggdrasil.
I have to do jobs involving RH and co, and it's just a bit of a pain dealing with elderly stuff. Tomcat? OK, you can have one from 1863. There is a really good security backport effort, but why on earth start off with a kernel that is using a walking stick?
Perhaps I am being unkind but for me the RH efforts are (probably) very stable and a bit old.
It's not the distro itself either. The users seem to have snags with updating it.
I (very generally) find that RH shops are the worst at [redacted]
Hi! I'm sorry this has been your experience. I'm one of the Red Hatters who's been working behind the scenes to get this over the finish line.
My genuine thanks for your earnest expression. The version and ABI guarantee is not for everyone. At the same time, some folks around these parts know that I'm "not an apologist for running an out-of-date kernel". I can assure you that everything shipped in the forthcoming P550 image is fresh: GCC 15, LLVM 19, etc. It's intended for development, to get more software over the finish line for RISC-V.
Conflict of interest statement: I work for Red Hat (Formerly CoreOS), and I'm also the working group lead for the distro integration group within RISE (RISC-V Software Ecosystem).
> The version and ABI guarantee is not for everyone.
As an aside, that kABI guarantee only goes so far. I work in HPC/AI, and the out-of-tree drivers we use, like the MOFED and Lustre drivers, would break with EVERY SINGLE RHEL minor update (like RHEL X.Y -> X.(Y+1)). I'm using the past tense because I haven't been using RHEL for this purpose for the past ~5 years, so maybe it has changed since, although I doubt it.
In that boat now, weak-modules means you sometimes get lucky, and can reuse. However, since it's more effort to determine if a rebuild is needed than just slap the "build all the vendor kmods and slurm" button, we tend to build for each kernel. IIRC el8 added kernel symbol signature hashes as Provide/Requires, with automation to extract them at build time, so kmods got a lot easier to deal with.
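The Provides/Requires mechanism described above can be inspected with plain rpm queries. A sketch, where `kmod-example` is a hypothetical placeholder for your vendor kmod package:

```shell
# Which kABI symbol hashes does the prebuilt module require?
# ("kmod-example" is a placeholder -- substitute your vendor kmod.)
rpm -q --requires kmod-example | grep '^ksym('

# Which symbol hashes does the installed kernel provide?
rpm -q --provides kernel-core | grep '^ksym(' | sort > /tmp/kernel-ksyms

# If every required ksym(...) hash appears in the kernel's provides,
# the existing module build can be reused after the update.
```

This is what makes the "do we need a rebuild?" question answerable mechanically, even if in practice it's often easier to just rebuild everything.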
Sorry, I was being imprecise. Rebuilds per se are no problem, as both MOFED and Lustre provide sources, and DKMS nicely automates rebuilding when installing a new kernel image. The actual problem is that RHEL minor releases would also break kernel-internal APIs, and thus building the kernel modules would fail.
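For reference, the DKMS workflow mentioned here is: register the module sources once, and DKMS rebuilds them on every kernel install. A minimal sketch with a hypothetical module named "example" (name, version, and paths are placeholders):

```shell
# /usr/src/example-1.0/dkms.conf -- lives alongside the module sources.
cat > /usr/src/example-1.0/dkms.conf <<'EOF'
PACKAGE_NAME="example"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="example"
DEST_MODULE_LOCATION[0]="/updates"
AUTOINSTALL="yes"   # rebuild automatically for each new kernel
EOF

# Register, build, and install against the running kernel:
dkms add -m example -v 1.0
dkms build -m example -v 1.0
dkms install -m example -v 1.0
```

With `AUTOINSTALL="yes"`, the rebuild happens transparently on kernel updates, which works fine right up until the kernel-internal API the module uses changes underneath it.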
Is anyone trying to get those drivers upstreamed?
Yes, and yes.
The kernel parts of MOFED are largely backports of the latest and greatest upstream kernel drivers to the various distro kernels their customers actually run. (The non-kernel parts of MOFED are mostly open source but do contain some proprietary special sauce on top; IIRC SHARP support isn't available in FOSS.) The HPC community does tend to want the latest RDMA drivers, as those are critical for performance at scale.
For Lustre, the client driver was upstreamed into staging, where it sat, AFAIU, largely unused for a few years until it was ripped out again. The problem was that the Lustre developers didn't adopt an upstream-first development approach, so the in-kernel driver was basically a throw-it-over-the-fence fork that nobody cared about. I think there is an effort to try again and hopefully adopt an upstream-first approach; it remains to be seen whether it'll succeed.
For MOFED, why not just wholesale use a newer Linux kernel version?
Perhaps the cure is worse than the disease? There are several reasons to stay with a distro kernel:
- Lustre releases target distro kernels, upstream would likely break.
- Distro stays on top of CVE's etc. and provide updates when needed.
- HW likely certified for a few supported distros only, use anything else and you're on your own.
That, and if that's not possible, one can try to get the used kABI symbols graylisted at Red Hat, so as to be informed when they change.
> Tomcat ... OK you can have one from 1863. There is a really good security back port effort but why on earth start off with a kernel that is using a walking stick.
Because old software is battle-tested and reliable. Moreover, upgrading software is always a pain, so it's best to minimize how often you have to do it. With a support policy of 10 years, you just can't beat RHEL (and derivatives) for stability.
Having had to use those kinds of machines often as a user, it is a total pain. For some reason, these enterprise distributions end up being used a lot on scientific and machine learning clusters. You have to deal with 5-10 year old bugs that are solved in every other distribution already and you have to jump through hoops to make modern software run.
For me it always felt like the system administrators externalizing the cost on the users and developers (which are the same in many cases).
Despite my dislike of enterprise Linux, Red Hat is doing a lot of awesome work all around the stack. IMO Fedora and the immutable distros are the real showcase of all the things they do.
You can run whatever you want in containers, and you don't even need root permissions: Red Hat's podman can launch containers without root privileges.
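For illustration, a rootless invocation looks like any other podman command, just run as an unprivileged user (the image name is only an example):

```shell
# No sudo anywhere: podman creates a user namespace in which your
# unprivileged UID is mapped to root inside the container.
podman run --rm docker.io/library/alpine:latest echo 'hello from rootless'

# Inside that user namespace you appear as root:
podman unshare id -u    # prints 0
```

That UID mapping is the whole trick: processes in the container think they are root, but on the host they only ever have your ordinary permissions.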
> Despite my dislike of enterprise Linux, Red Hat is doing a lot of awesome work all around the stack. IMO Fedora and the immutable distros are the real showcase of all the things they do.
Fedora today is what RHEL will be tomorrow. They quite literally freeze a Fedora release to use as a base for RHEL's next release. If you like Fedora today you're gonna like Fedora tomorrow.
It's still painful when you can't even use the OS-provided version of git and have to install a newer one with conda.
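For what it's worth, the conda route is short; a sketch using conda-forge (the environment name "tools" is arbitrary):

```shell
# Create an isolated environment with a current git, leaving the
# OS-provided one untouched:
conda create -y -n tools -c conda-forge git

# Use it:
conda activate tools
git --version
```

The downside, as implied above, is that every user ends up maintaining their own parallel toolchain just to get a non-ancient git.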
If you can get Nix running on these ancient machines it'll bring you all up2date packages you want, you can create Nix profiles that you install in a pre-configured path so you can use the packages in systemd too if you fancy.
It's really really great, even if you don't use or plan to use NixOS (Nix was born long before NixOS).
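A sketch of that workflow, assuming the standard single-user installer from the Nix manual (paths are the defaults; no root needed on most modern distros):

```shell
# Single-user install into /nix, no daemon:
sh <(curl -L https://nixos.org/nix/install) --no-daemon

# Pull a current git from nixpkgs into your user profile:
nix-env -iA nixpkgs.git

# Binaries land under a stable per-user path, which you can also
# reference from systemd units:
~/.nix-profile/bin/git --version
```

Because everything lives under `/nix` and `~/.nix-profile`, none of this touches or conflicts with the host's elderly RPM-managed packages.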
RHEL's kernel, the actual base of the operating system with the largest effect on stability, is not old. It might have a version number from the middle of the last century, but there are so many massive backports in there that a few years after release it gets closer to the latest mainline than to its original version. Don't read too much into the version number.
Yup, one example I noticed recently: Red Hat backported the eBPF subsystem from Linux 6.8 to their 5.14 kernel (in RHEL 9.5):
https://docs.redhat.com/en/documentation/red_hat_enterprise_...
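One way to see this in practice is to probe features instead of trusting the version string. A sketch using bpftool (which ships with the kernel tools; the exact helper list depends on the build):

```shell
# The version string says 5.14...
uname -r

# ...but feature probing can reveal backported eBPF capabilities
# (e.g. helpers that landed upstream well after 5.14):
bpftool feature probe kernel | grep -i 'bpf_loop'
```

If the grep finds the helper, the running kernel clearly contains code that postdates its own version string.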
I’m old. I used one of the original boxed RH distros. It was cool then. That was almost 30 years ago.
I know they give back to Linux, and I’m thankful for the enterprises that pay for it because of that.
It’s not a bad company, though it’s strange that, from what I hear, you could be a great developer and still lose your position there if your project gets cut, unless another team picks you up.
But when Linus created Linux, he didn’t do it for money, and RH just seems corporate to me like the Microsoft of Linux, way back before Microsoft had their own Linux. I want my Linux free-range not cooped.
They don’t do anything wrong. They just don’t give the vibe. Anyone asking for money for it doesn’t “get it” to me.
> But when Linus created Linux, he didn’t do it for money, and RH just seems corporate to me like the Microsoft of Linux, way back before Microsoft had their own Linux. I want my Linux free-range not cooped.
You seem to forget that Red Hat has funded a lot of the development of the Linux ecosystem. There would be essentially no modern linux environment without Red Hat.
I'm thankful to RedHat, every other "cornerstone project" seems to be funded by them. The one that crossed my mind now is the PipeWire audio server, it just solved Linux audio for realsies this time.
I wouldn't use their products for much though, too enterprisey. Their projects are great and I'm happy someone else buys their packages.
Except Linux only took off thanks to those who didn't want to pay for UNIX, and to the UNIX vendors who wanted to cut R&D costs from their own in-house UNIX clones and were uncertain whether BSD was still safe to use during the ongoing AT&T lawsuit.
Re the last part: USL vs BSDi was filed in 1992 and settled in 1994, long before any sizeable vendor paid attention to Linux. (Version 1.0 of the Linux kernel was released at about the same time that lawsuit was settled.) So you shouldn't use that argument as part of your rationale.
I should because perceptions take a very long time to change.
If you ask a random dev on the street about .NET, there is a high probability they will answer that it is Windows-only and requires Visual Studio.
You do not believe that what happened from 1991 to 1995 explains anything about how we got here?
Red Hat was founded in 1993. When do you think they got the idea? When do you think companies like Red Hat decided to bet on Linux instead of BSD? Debian was founded in 1993 as well. When was that lawsuit settled again?
An awful lot of the Linux momentum that carries us to this very day appeared after the BSD lawsuit was filed and before it was settled.
What about the other “big and professional” competitor to Linux?
GNU HURD was started in 1990. The original plan was to base it off the BSD kernel. The Linux kernel appeared in 1991. BSD fell under legal threat in 1992. Debian appeared in 1993. RMS lost interest in HURD. None of these dates had much impact you don’t think?
It is not that they did not want to pay for UNIX. After all, they pay for RHEL.
They did not want to pay for big iron for sure, preferring commodity hardware. Even then though, many Linux boxes can get pretty expensive.
I think it is more about openness and control than it is about cost. Linux brings flexibility and freedom.
So does BSD of course. The timing of that lawsuit changed the world.
> I’m old. I used one of the original boxed RH distros. It was cool then. That was almost 30 years ago.
Does anyone remember glint (graphical UI for RPM) that was part of Red Hat? Must have been Red Hat 4.x or thereabout.
Yes indeed. How about AwesomeWM? Not the one that exists now. The one from Red Hat 4.x or so.
Redhat made Linux palatable for enterprise though. Without enterprise adoption where would Linux be?
Yep, when you have thousands of different production apps, installed and running directly on Linux - not talking about containers or microservices here - you’ll have very little appetite to upgrade all of them to the latest and shiniest technologies every couple of years. Stability & compatibility with existing skillsets is more important.
Many companies never upgrade anything anyway. I once got a job at a startup only to discover they were running Ubuntu 16.04 and python 2.7, in 2022. The dependency situation was also bad. Basically, their stack was so old it had gone past tech debt and into bankruptcy.
You are stable
But the world around you does not wait for you, and keeps moving
Want it or not, you move with it
And thus, you are not stable
10y operating system is a joke
I have to confess that my early experiences with RedHat as a teenager, and dealing with the nightmarish RPM dependencies, soured me on the distribution. I went to Debian and then its many descendants and never looked back; APT seemed magical in comparison.
I assume they have a package manager that resolves dependencies well now? Is that what an RPM wrangler is?
This is a very outdated view. dnf runs circles around apt. Try it out, or at least find man pages on the ole 'net and see what it can do.
Probably the thing I like the most is transactional installation (or upgrades/downgrades/removals) of packages with proper structured history of all package operations (not just a bunch of log records which you have to parse yourself), and the ability to revert any of those transactions with a single command.
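Concretely, that workflow looks like this (the transaction ID 42 is illustrative):

```shell
# Structured history of every package transaction, with IDs:
dnf history list

# Full detail of one transaction -- packages, versions, actions:
dnf history info 42

# Revert that whole transaction in a single command:
sudo dnf history undo 42
```

There is also `dnf history rollback <id>`, which reverts everything back to the state just after that transaction, rather than undoing a single one.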
I had the same experience as the OP at the beginning of the century. I built a lot of RPM packages back then, and it was clear that the system of dependencies built into the RPM format itself (not apt or dnf; this is the dpkg level in Debian terms) was poorly thought out and clearly insufficient for any complex system.
I've also migrated to Debian and it felt like a huge step forward.
I'm on Arch now, BTW.
The equivalent of RPM on Debian is the .deb package format. The equivalent of apt is dnf (or yum before it, or up2date before that).
Red Hat is just old enough to have existed before package managers existed on Linux. Debian did not have one at first either.
Slackware still has no real package manager.
Possibly I'm just more used to apt (though fedora was my first linux), but I've found apt has better interfaces for what I'm trying to do (e.g. querying the state of the system and ensuring consistency across machines), and I've not found an equivalent to aptitude for dnf.
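For anyone in the same position, here are rough dnf counterparts to common apt/aptitude queries (flags as documented for dnf 4; there is indeed no full aptitude-style TUI equivalent):

```shell
dnf repoquery --installed              # ~ apt list --installed
dnf repoquery --whatrequires bash      # reverse dependencies
dnf repoquery -l coreutils             # package file list, ~ dpkg -L
dnf check                              # verify dependency consistency
```

`dnf repoquery` covers most of apt-cache/apt-file territory; what it doesn't replicate is aptitude's interactive dependency-resolution browser.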
Side-note, the other difference I've noticed is Debian (and presumably its derivatives) has better defaults (and debconf) for packages, so whereas stock config would work on Debian, on Rocky I have to change config files, install missing packages etc.
rpm dependencies have been a solved problem with yum (and now dnf) for about two decades.
Yum was borrowed from Yellow Dog Linux.
To be pedantic, yum was not from Yellow Dog, it is Yellow dog Updater Modified after all. It was a rewrite of the Yellow Dog Updater by people at Duke University. (Yellow Dog Linux was based on Red Hat.)
There was a lot of competition around package managers back then. For RPM, there were also urpmi, apt-rpm, etc.
Which in turn was based on Red Hat Linux: https://en.wikipedia.org/wiki/Yellow_Dog_Linux
First impressions really matter. This is also why I went Debian. You shouldn't be getting marked down for saying it.
Many of us were running on 28.8 dial-up. Internet search was not even close to a solved problem. Compiling a new kernel was an overnight or even weekend process. Finding and manually downloading rpm dependencies was slow and hard. You didn't download an ISO; you bought a CD (or, soon, a DVD) that you could boot off of.
Compare that to Debian's apt-get or Suse's yast/yast2 of the time, both just handled all that for you.
Debian and SuSE were fun and fit perfectly into the Web 1.0 world; Red Hat was corporate. systemd was pushed by Red Hat.
> Compiling a new kernel was an overnight or even weekend process
One friend and I had a competition who could make the smallest kernel configuration still functional on their hardware. I remember that at some point we could build it in ten minutes or so. This was somewhere in the nineties, I was envious of his DX2-50.
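The modern equivalent of that game is one make target away (timings will of course vary with hardware):

```shell
# From a kernel source tree: generate the smallest configuration the
# build system will accept, then build in parallel.
make tinyconfig
make -j"$(nproc)"
```

On current hardware a tinyconfig build finishes in minutes rather than overnight, though the resulting kernel won't boot much of anything without adding drivers back.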
> Compare that to Debian's apt-get or Suse's yast/yast2 of the time, both just handled all that for you.
One of the really huge benefits of S.u.S.E. in Europe in the nineties was that you could buy it in nearly every book shop and it came with an installation/administration book and multiple CD-ROMs with pretty much all packages. Since many people did not have internet at all or at most dial-up, it gave you everything to have a complete system.
Yes, I remember that too. I bought a 3-DVD set of Debian Sarge: 2 DVDs with everything, and the 3rd was source packages.
You are mixing a lot of history there.
Red Hat had packages but not package management at first. However, the same is true of Debian. It depends when you used them.
Red Hat Linux branched into RHEL (corporate) and Fedora (community).
SuSE went down a similar road to Red Hat and has both OpenSUSE and SLE these days. Fedora is less corporate than OpenSUSE is.
Debian is still Debian but a bit more pragmatic and a bit less GNU these days (eg. non-free firmware).
Disclaimer: I'm very involved in the kernel part of this for $company.
The RHEL kernels themselves do see many improvements over time; the code you'll see when the product goes end-of-life is considerably updated compared to the original version string you see in the package name / uname -a. Customer and partner feature requests, CVE fixes, and general bug fixes go in almost every day.
The first problem of 'running old kernels' is exacerbated by the kernel version string not matching code reality.
The second problem is that many companies don't start moving to newer RHELs when they're out; they often stick to current minus one, which is a bit of a problem because by the time they roll out a release, n-1 is likely entering its first stage of "maintenance", so fixes are more difficult to include. If you can think of a solution to this, I'm all ears.
The original reason for not continually shipping newer kernel versions is to ensure stability by providing a stable, whitelisted kABI that third-party vendors can build on top of. This is not something that upstream and many OS vendors support, but with the "promise" of not breaking kABI, updates should happen smoothly without third parties needing to update their drivers.
The kabi maintenance happens behind the scenes while ensuring that CVE fixes and new features are delivered during the relevant stage of the product lifecycle.
The kernel version is usually very close to the previous release; in the case of RHEL 10 it's 6.13, and already, with zero-day fixes, it has parts of newer code backported, tested, etc. in the first errata release.
The security landscape is changing; maybe some day the Red Hat business unit will wake up and decide to ship a rolling, better-tested kernel (Red Hat DOES have an internal/tested https://cki-project.gitlab.io/kernel-ark/ which is functionally this). Shipping it has the downside that third-party vendors would not have the same kABI stability guarantees RHEL currently provides, muddying the waters of RHEL's value and confusing people about which kernel they should be running.
I believe there are two customer types, ones who would love to see this, and get the newest features for their full lifecycle, and ones who would hate it, because the churn and change would be too much introducing risk and problems for them down the line.
It's hard, and likely impossible, to keep everyone happy.
As I mentioned in another comment on this thread:
> As an aside, that kABI guarantee only goes so far. I work in HPC/AI, and the out-of-tree drivers we use, like the MOFED and Lustre drivers, would break with EVERY SINGLE RHEL minor update (like RHEL X.Y -> X.(Y+1)). I'm using the past tense because I haven't been using RHEL for this purpose for the past ~5 years, so maybe it has changed since, although I doubt it.
I'm not sure what the underlying problem here is: is the kABI guarantee generally worthless, or is it just that the MOFED and Lustre drivers need to use features not covered by the "kABI stability guarantee"?
I work on Lustre development. Lustre uses a lot of kernel symbols not covered by the kABI stability guarantee, and we already need to maintain configure checks for all of the other kernels (SUSE, Ubuntu, mainline, etc.) that don't offer kABI anyway. So in my opinion, it's not worth the effort to adhere to kABI just for RHEL, especially when RHEL derivatives might not offer the same guarantees. DKMS works well enough, especially for something open source like Lustre.
Honestly, I'm not sure who kABI is even designed for. None of the drivers I've interacted with in the HPC space (NVIDIA, Lustre, vendor network drivers, etc.) seem to adhere to kABI. DKMS is far more standard. I'd be interested to know which vendors are making heavy use of it.
I believe kABI is becoming less and less valuable over time.
> Especially when RHEL derivatives might not offer the same guarantees.
They do not, as you likely have experienced.
> Honestly, I'm not sure who kABI is even designed for.
You make it work once, using the accepted kABI symbols, and then don't have to worry about updating your drivers (you likely know this).
Some customers and systems are very change-averse; almost any change is too much for them.
The problem is that some vendors don't participate in or care about the kABI program. They have their reasons; maybe the cost of maintaining both RHEL compatibility and upstream compatibility is too high, so they simply choose whichever one is the least painful to adhere to when a customer requests a fix.
If companies talked to partner engineering about their kABI requirements, I think there would be a lot less breakage. However, I'm sure I'm oversimplifying the reasons they can't or won't do this.
I completely understand that the work is non-trivial and that they have many environmental pressures that affect their choices. The kABI is the olive branch; they can take it or not.
What if, more than a rolling kernel, we get a new kernel every two years or so?
Or maybe one in the middle of the (expected) lifetime of the major release ?
Just thinking out loud, but I acknowledge that maintaining a kernel version is no small task (probably takes a lot of engineering time)
> Or maybe one in the middle of the (expected) lifetime of the major release ?
This idea has been floated internally; thank you for voicing it from the customer's perspective.
It seems like the business unit believes the better idea is to just release another version. I'm not sold on more frequent releases, but I'm not a decision maker in that area.
Thank you for listening!
I want to reiterate that this is not a super strong pain point for me.
Overall I still like RHEL very much!
How much of the RHEL kernels is stuff that isn't in Linux mainline or LTS?
Pretty much all of it is in mainline modulo the secure boot lockdown patches, which are downstream for all distributions because Linus fundamentally believes those patches do not make sense.
Linux longterm often is missing stuff the RHEL kernel has, because RHEL backports subsystems from mainline with features and hardware support.
I’d rather use Red Hat than Ubuntu. I was handed a machine the other week with Ubuntu 23.10 on it, the OS supplied by a vendor with extensive customization. Apt was dead. Fuck that. At least RH doesn’t kill their repos.
I've got Ubuntu 22.04 machines lying around that still update because they are LTS. Ubuntu has a well-publicised policy for releases, which you will obviously have read.
Try do-release-upgrade.
You also mention "OS supplied from a vendor with extensive customization. Apt was dead."
How on earth is that Ubuntu's problem?
Isn’t Ubuntu basically killing apt?
My Ubuntu became unusable because it kept insisting on installing a snap version of Firefox breaking a whole bunch of workflows.
I do want to try a RH based OS (maybe Fedora) so they don’t keep changing things on me, but just where I am in life right now I don’t have the time/energy to do so, so for now I’m relying on my Mac.
Hopefully I can try a new Linux distro in a few months, because I can’t figure it out yet, but something about macOS simply doesn’t work for me from a getting work done perspective.
I've heard many good things about Pop OS. It's like Ubuntu done right, and it does have an apt package for Firefox.
(I run Void myself, and stay merrily away from all these complications.)
In Ubuntu, it's also possible to ditch Firefox from the snap store and install it using apt-get. Not from Ubuntu's repo, but from the official Firefox Debian repository:
https://www.omgubuntu.co.uk/2022/04/how-to-install-firefox-d...
I know it's not the best but at least it can be done with little effort.
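The commonly cited route (which I believe is what the linked article describes; verify the details against it) uses the Mozilla Team PPA plus apt pinning, so the PPA's deb outranks Ubuntu's snap-transition package:

```shell
sudo add-apt-repository ppa:mozillateam/ppa

# Pin the PPA above the Ubuntu archive so the deb wins over the
# snap-transition package (origin string per Launchpad convention):
sudo tee /etc/apt/preferences.d/mozillateam <<'EOF'
Package: firefox*
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 1001
EOF

sudo apt update && sudo apt install firefox
```

Without the pin, a plain `apt install firefox` on Ubuntu pulls the transitional package that reinstalls the snap, which is exactly the behavior being complained about upthread.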
I can highly recommend it. Have been using it for a couple years or so now, haven't had any serious issues.
> It's like Ubuntu done right
But it is Ubuntu?
It's based on Ubuntu, but it's different enough: https://en.wikipedia.org/wiki/Pop!_OS
I have been using Fedora Sway as my desktop operating system for a couple years now and I am very happy. It’s definitely worth a try. I have access to flatpak when I need it for apps like steam but the system is still managed by rpm/dnf. There’s of course some SELinux pain but that’s typically my fault for holding it wrong. Overall very impressed.
I cannot update the OS per the contract.
It’s Ubuntu’s problem because they decide they’re smarter than their users and nuke their repos.
Fuck all of that.
It's well publicized that they don't maintain support for old, non-LTS distros. They literally delivered what they promised. Could have been avoided by using an LTS distro.
Fedora does the same. No corporate vendor supports 6 month cycle distros for more than a year. RHEL releases come super slowly, for example.
I didn’t have a say in the matter of OS choice, it doesn’t matter how well-publicized Ubuntu’s stance is, it’s wrong. I don’t care if it’s not an LTS, keep the fucking repos open and advertise you’re using an insecure OS. Let me, the user, make that choice. Don’t pretend I’m stupid and need some kind of benevolent dictator to make choices for me, or handicap me because they’re smarter than me. They’re not.
That’s exactly how it works. If you want to use an unsupported, insecure OS, you just have to opt into it.
You opt into it by changing your repositories to the https://old-releases.ubuntu.com archive mirror. You can install and use Ubuntu 6.10 if you want.
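The switch itself is a couple of sed substitutions over the apt sources (hostname per Ubuntu's own old-releases archive; back the file up first):

```shell
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak

# Point both the archive and security mirrors at old-releases:
sudo sed -i \
  -e 's|http://[a-z.]*archive.ubuntu.com|http://old-releases.ubuntu.com|g' \
  -e 's|http://security.ubuntu.com|http://old-releases.ubuntu.com|g' \
  /etc/apt/sources.list

sudo apt update
```

After that, apt works again against the frozen archive; you just receive no further security updates, which is the trade-off you opted into.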
"Keeping the repos open" has a cost on their part. Servers aren't free. If you think you're smart then mirror your own repos.
Really? How much more can it cost to keep both the LTS and non-LTS repos open at the same time?
C’mon, that’s such a weak argument I think you know it.
If there's no cost in time, effort or equipment then mirror it yourself. It's easy, right?
Or just use an LTS distro like literally every single other organization that depends on Ubuntu for their business SMH. Like, it's absurd to even think about...