Some __nonstring__ Turbulence
lwn.net | 133 points by jwilk 2 days ago
Fedora stupidly uses a beta compiler in a new release, Torvalds blindly upgrades, makes breaking, unreviewed changes in the kernel, then flames the maintainer who was working on cleanly updating the kernel for the not-yet-released compiler?
I admire Kees Cook's patience.
Exactly. As quoted in the article:
> you didn't coordinate with anyone. You didn't search lore for the warning strings, you didn't even check -next where you've now created merge conflicts. You put insufficiently tested patches into the tree at the last minute and cut an rc release that broke for everyone using GCC <15. You mercilessly flame maintainers for much much less.
Hypocrisy is an even worse trait than flaming people.
> Hypocrisy is an even worse trait than flaming people.
Eh, I mean, everyone's a hypocrite if you dig deep enough; we're all a big nest of contradictions internally. Recognizing this and being accountable for it is what matters, though. He could have simply owned his mistake and swallowed his pride, and this wouldn't have been such an issue.
On the one hand, sure, fine. He has raked people over the coals for less. However, this is just an RC. Further, how long has Linus been doing this?
I remember Maddox on xmission having a page explaining that while he may make a grammatical error from time to time, he has published literally hundreds of thousands of words, and the average email he receives contains 10% errors.
However, Linus is well-known for being abrasive, abusive, call it what you want. If you can't take it, don't foist it, Linus. Even if you've earned the right, IMO.
Nobody earns the right to be an asshole. That is not something that can be earned.
Indeed. On the other hand, the right to show that you are an asshole is available to anyone, and it has become quite popular!
I'd say if you're doing truly heroic solo efforts, then you can earn that. (But I can only think of fictional examples.) For team efforts like the Linux kernel, sure, no amount of individual contribution to that project grants you the right to belittle the other contributors.
This idea that if you've done great things, then you've earned the right to treat people poorly, needs to go away. It's toxic and gross, and we should expect and demand better of our heroes (and ourselves).
Fabrice Bellard has earned that right, but somehow I don't think he is one!
Fabrice Bellard's work is impressive, but I wouldn't call it heroic. I was thinking more of the grumpy-guts who ensures the local homeless shelter is adequately stocked with food, clean bedding, and toiletries, day in and day out, even in the depths of winter. You're allowed to be vaguely misanthropic in your interpersonal relationships if you're doing something like that, at least in my book.
Again, the only non-fictional people I know who qualify are actually really nice to people.
IMHO Cook is following good development practices.
You need to know what you support. If that is going to change, the change must be planned somehow.
I find Torvalds reckless for changing his development environment right before a release. If he really needs that computer to release the kernel, it must be a stable one. Even better: it should be a VM (hosted somewhere) or part of a CI/CD pipeline.
The real problem here was "-Werror", dogmatically fixing warnings, and using a position of privilege to push in last-minute commits without review.
Compilers will be updated, and they will have new warnings; this has happened numerous times and will happen again. The Linux kernel has always supported a wide range of compiler versions, from the very latest to 5+ years old.
I've ranted about "-Werror" in the past, but to try to keep it concise: it breaks builds that would and should otherwise work. It breaks older code with a newer compiler, or with a different platform's compiler. This is bad because then you can't, say, build the exact code specified/intended without modifications, or test and compare different versions or different toolchains. A good developer will absolutely not tolerate a deluge of warnings all the time; they will fix the warnings to get a clean build over a reasonable time, with well-considered changes, rather than be forced to fix them immediately with brash, disruptive code changes. And this is a perfect example of why. New compiler, fine; new warnings, fine. Warnings are a useful feature, distinct from errors. "-Werror" is the real error.
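To make the "new compiler, new warnings" point concrete, here's a minimal sketch of the warning at the center of this episode (diagnostic wording approximate): GCC 15 added -Wunterminated-string-initialization, which fires on char arrays deliberately sized to drop the trailing NUL, and the nonstring attribute (wrapped by the kernel as __nonstring) is the intended annotation for them. With -Werror, the first build under the new compiler turns every such site into a hard failure:

```c
/* Builds clean under GCC <= 14; with GCC 15, -Wextra and -Werror the
 * first declaration becomes an error along the lines of:
 *   "initializer-string for array of 'char' truncates NUL terminator"
 */

/* 16 characters squeezed into char[16]: deliberately no NUL terminator. */
static char hex_table[16] = "0123456789abcdef";

/* The clean fix: tell the compiler the array is intentionally not a
 * C string. The kernel spells this __nonstring via a wrapper macro. */
static char hex_table_ok[16] __attribute__((nonstring)) = "0123456789abcdef";

int main(void)
{
    return hex_table[0] == hex_table_ok[0] ? 0 : 1;
}
```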
With or without -Werror, you need your builds to be clean with the project's chosen compilers.
Linus decided, on a whim, that a pre-release of GCC 15 ought to suddenly be a compiler that the Linux project officially uses, and threw in some last-minute commits straight to mainline, which is insane. But even without -Werror, when the project decides to upgrade compiler versions, warnings must be silenced, either by disabling the new warnings or by changing the source code. Warnings have value, and they only have value if they're not routinely ignored.
For the record, I agree that -Werror sucks. It's nice in CI, but it's terrible to have it enabled by default, as it means your contributors will have their builds broken just because they used a different compiler version than the ones the project has officially adopted. But I don't think it's the problem here. The problem here is Linus's sudden decision to upgrade to a pre-release version of GCC with new warnings and to commit "fixes" straight to mainline.
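For what the "disable the new warning" route can look like in practice (a sketch; the warning name is GCC 15's, the version guard is illustrative): you can pass -Wno-unterminated-string-initialization for the whole build, or suppress it around specific declarations with GCC's diagnostic pragmas:

```c
/* Suppressing one new warning locally, without -Werror games.
 * The guard matters: older GCCs emit -Wpragmas warnings for unknown
 * names inside "#pragma GCC diagnostic ignored". */
#if defined(__GNUC__) && __GNUC__ >= 15
# pragma GCC diagnostic push
# pragma GCC diagnostic ignored "-Wunterminated-string-initialization"
#endif

static char magic[4] = "WXYZ";  /* deliberately not NUL-terminated */

#if defined(__GNUC__) && __GNUC__ >= 15
# pragma GCC diagnostic pop
#endif

int main(void)
{
    return magic[0] == 'W' ? 0 : 1;
}
```

The blanket command-line form is even lower-risk during a compiler transition, since GCC silently accepts unknown -Wno-* options unless some other diagnostic is emitted.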
Sadly, I lost that battle with Torvalds. You can see me make some of those points on LKML.
This is my take-away as well. Many projects let warnings fester until they hit a volume where critical warnings are missed amidst all the noise. That isn't ideal, but seems to be the norm in many spaces (for instance the nodejs world where it's just pages and pages of warnings and deprecations and critical vulnerabilities and...).
But pushing breaking changes just to suppress some new warning should not be the alternative. Working to minimize warnings in a pragmatic way seems more tenable.
He releases an rc every single week (OK, except before rc1, when there's a two-week merge window), so there's no "off" time in which to upgrade.
Not that I approve of the untested changes; I'd have used a different gcc temporarily (container or whatever), but, yeah, well...
I find it surprising that Linus bases his development and release tools on whatever's in the repositories at the time. Surely it is best practice to pin to a specified, fixed version and upgrade as necessary, so everyone is working with the same tools?
This is common best practice in many environments...
Linus surely knows this, but here he's just being hard-headed.
People downloading and compiling the kernel will not be using a fixed version of GCC.
Why not specify one?
That can work, but it can also bring quite a few issues. Mozilla effectively does this; their build process downloads the build toolchain, including a specific clang version, during bootstrap (i.e., when setting up the build environment).
This is super nice in theory, but it gets murky if you veer off the "I'm building current mainline Firefox" path. For example, I'm a maintainer of a Firefox fork that often lags a few versions behind. It has substantial changes, and we are only two guys doing the major work, so keeping up with current changes is not feasible. However, this is a research/security-testing-focused project, so that is generally okay.
Coming back to the build issue: apparently it's costly to host all those toolchain archives, so they frequently get deleted from the remote repository, which leads to the build only working on machines that downloaded the toolchain earlier (not a GitHub Actions runner, for example).
Given that there are many more downstream users across effectively a ton of kernel versions, this quickly gets fairly expensive and takes a ton of effort, unless you pin to some old version and rarely change it.
So, as someone who wants to mess around with open-source projects, their supporting more than one specific compiler version is actually quite nice.
Conceptually it's no different from any other build dependency. It is not expensive to host many versions: $1 is enough to store over 1000 compiler versions, which would be overkill for the needs of the kernel.
How would that help? People use the compilers in their distros, regardless of what's documented as a supported version in some readme.
Because then, if something that is expected to compile doesn't compile correctly, you know you should check your compiler version. It is the same reason you specify not just which libraries your project depends on but also their versions.
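As a sketch of what pinning can look like at the source level (version numbers illustrative, not the kernel's actual floor): the kernel already does the "minimum version" half of this in include/linux/compiler-gcc.h, which computes GCC_VERSION and #errors out on compilers that are too old. A hard pin would add an upper bound, at the cost of breaking for everyone whose distro has moved on, which is exactly the trade-off being argued here:

```c
/* Compile-time compiler-version policy, kernel-style. */
#define GCC_VERSION (__GNUC__ * 10000 \
                     + __GNUC_MINOR__ * 100 \
                     + __GNUC_PATCHLEVEL__)

/* Floor: refuse compilers that are too old (as the kernel does). */
#if GCC_VERSION < 50100
# error "GCC too old: this project requires at least GCC 5.1"
#endif

/* Ceiling: a project that really wanted one pinned toolchain could also
 * reject anything newer than what it has tested (illustrative only). */
#if GCC_VERSION >= 160000
# warning "GCC newer than any tested version; proceed with caution"
#endif

int main(void) { return 0; }
```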
People are usually going to go through `make`; I don't see a reason that couldn't be instrumented to (by default) acquire an upstream GCC instead of whatever forked garbage ends up in `$PATH`.
This would result in many more disasters as system GCC and kernel GCC would quickly be out of sync causing all sorts of "unexpected fun".
Why would it go wrong? The ABI is stable and independent of the compiler. You would hit issues with C++, but not C. I have certainly built kernels using different versions of GCC than what the stuff in /lib was compiled with, without issue.
You'd think that, but in effect kconfig/kbuild has many cases where it says "if the compiler supports flag X, use it", where X implies an ABI break. Per-task stack protectors come to mind.
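A concrete way to see why that is ABI-relevant (a sketch, with illustrative build flags): compile the same function with and without stack protection and compare the generated assembly. The canary load and check that the compiler inserts reference a storage location (a global __stack_chk_guard or a TLS slot, depending on the target), and per-task stack protector support moves that location into the task struct via -mstack-protector-guard-* style flags on some architectures, so every object linked into one kernel image has to agree:

```c
/* stackguard.c -- compare the two outputs:
 *   gcc -S -O2 -fstack-protector-all stackguard.c -o with-ssp.s
 *   gcc -S -O2 -fno-stack-protector  stackguard.c -o without-ssp.s
 * The protected version stores a canary on function entry and verifies
 * it before returning, calling __stack_chk_fail on mismatch. Where the
 * canary is read from is the ABI-sensitive part.
 */
#include <string.h>

char *copy_into_buf(const char *src)
{
    static char out[64];
    char tmp[64];  /* the local array is what triggers canary insertion */

    strncpy(tmp, src, sizeof(tmp) - 1);
    tmp[sizeof(tmp) - 1] = '\0';
    strcpy(out, tmp);
    return out;
}
```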