AI is breaking two vulnerability cultures

jefftk.com

144 points by speckx 4 hours ago


tptacek - 2 hours ago

This has been a very long time coming and the crackup we're starting to see was predicted long before anyone knew what an LLM is.

The catalyst is the shift towards software transparency: both the radically increased adoption of open source and source-available software, and the radically improved capabilities of reversing and decompilation tools. It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.

This has been playing out in slow motion ever since BinDiff: you can't patch software without disclosing vulnerabilities. We've been operating in a state of denial about this, because there was some domain expertise involved in becoming a practitioner for whom patches were transparently vulnerability disclosures. But AIs have vaporized the pretense.

It is now the case that any time something gets merged into mainline Linux, several different organizations are feeding the diffs through LLM prompts aggressively evaluating whether they fix a vulnerability and generating exploit guidance. That will be the case for most major open source projects (nginx, OpenSSL, Postgres, &c) sooner rather than later.
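The pipeline described here can be sketched in a few lines. This is a minimal illustration, not anyone's actual tooling: `llm_complete` is a hypothetical stand-in for a real model API call, here stubbed with a crude keyword check so the sketch runs on its own.

```python
# Sketch of the commit-triage loop described above: for each merged diff,
# ask a model whether it silently fixes a vulnerability.

def build_triage_prompt(diff: str) -> str:
    """Frame the diff as a classification task for the model."""
    return (
        "You are a vulnerability analyst. Does the following patch fix "
        "a security vulnerability? Answer SECURITY or BENIGN, then explain "
        "what an attacker could do against unpatched systems.\n\n" + diff
    )

def llm_complete(prompt: str) -> str:
    # Hypothetical placeholder: a real pipeline would call a model here.
    # This stub just flags diffs that touch length/bounds handling.
    if "len" in prompt or "bound" in prompt or "overflow" in prompt:
        return "SECURITY: looks like a bounds-check fix"
    return "BENIGN"

def looks_like_silent_security_fix(diff: str) -> bool:
    """True if the model classifies the commit as a security fix."""
    return llm_complete(build_triage_prompt(diff)).startswith("SECURITY")

# Illustrative patch, not a real kernel commit.
patch = """--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1,3 +1,5 @@
-    memcpy(dst, src, user_len);
+    if (user_len > sizeof(buf))
+        return -EINVAL;
+    memcpy(dst, src, user_len);
"""
print(looks_like_silent_security_fix(patch))  # → True with this stub
```

The point is that the per-commit marginal cost of this loop is now a fraction of a cent, which is what changes the disclosure calculus.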

The norms of coordinated disclosure are not calibrated for this environment. They really haven't been for the last decade.

I'm weirdly comfortable with this, because I think coordinated disclosure norms have always been blinkered, based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise! The delay also keeps information out of the hands of system operators who have options other than applying patches.

Animats - an hour ago

We have a huge problem.

The US is at war. Much of the world is at war at the cyber attack level right now. The US, the EU, most of the Middle East, Israel, Russia... Major services have been attacked and have gone down for days at a time - Ubuntu, Github, Let's Encrypt, Stryker. Entire hospital systems have had to partially shut down.

Now, in the middle of this, AI has made attacks much faster to generate. Faster than the defensive side can respond. Zero-day attacks used to be rare. Now they're normal.

It's going to get worse before it gets better. Maybe much worse.

rikafurude21 - 3 hours ago

This feels more like an old problem getting reframed as an AI problem.

people were already diffing kernel commits and figuring out which ones were security fixes long before llms. if a patch lands publicly, the race has basically already started.
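For context, the pre-LLM version of this was essentially grepping commit messages and diffs for security-flavored keywords. A toy sketch (the commits below are made up for illustration):

```python
# Crude keyword heuristic of the kind patch-watchers used long before LLMs:
# flag commits whose message or diff mentions classic vulnerability terms.
import re

SIGNALS = re.compile(
    r"overflow|use.after.free|out.of.bounds|double.free|CVE-\d{4}-\d+",
    re.IGNORECASE,
)

def looks_like_security_fix(commit_message: str, diff: str) -> bool:
    return bool(SIGNALS.search(commit_message) or SIGNALS.search(diff))

# Illustrative commit log, not real kernel history.
commits = [
    ("net: fix use-after-free in tcp_close", "(diff omitted)"),
    ("docs: update README", "(diff omitted)"),
]
flagged = [msg for msg, diff in commits if looks_like_security_fix(msg, diff)]
print(flagged)  # → ['net: fix use-after-free in tcp_close']
```

What LLMs change is not the existence of this race but its recall: keyword heuristics miss deliberately quiet fixes, while a model reading the diff itself catches far more of them.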

also not sure shorter embargoes really help. the orgs that can patch in hours are already fine. everyone else still takes days or weeks.

if anything, cheaper exploit generation probably makes coordinated disclosure more important, not less.

Havoc - 17 minutes ago

The "bugs are bugs" description reads pretty insane to me personally, but I know the Linux world has many people valuing the principle of it over practical matters.

90d seems long too though.

Think ultimately the big AI houses will need to help the core internet infra guys. Running latest and greatest AI over stuff like nginx and friends makes sense for us all collectively I think

JumpCrisscross - 3 hours ago

> So many security fixes are coming out now that examining commits is much more attractive: the signal-to-noise ratio is higher

Why?

> Additionally, having AI evaluate each commit as it passes is increasingly cheap and effective

This is the key. With AI, the “people won't notice, with so many changes going past” assumption fails.

miki123211 - 2 hours ago

AI will shorten update windows dramatically. 2026 is the worst year to be thinking about dependency cooldowns; we need to think about dependency warmups instead.

Soon, there will be no such thing as a safe way to disclose a vulnerability in an open source project. Centralized SaaS will have a major security advantage here.

xiaoyu2006 - 3 hours ago

The quick test doesn't show much: by asking outright whether this is a security patch, the prompt implies the assumption and guides the AI toward output that agrees with it. A confusion matrix would be more useful. Of course, this isn't meant to be a detailed AI capability evaluation.
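The confusion-matrix point can be made concrete with a toy example (labels below are invented for illustration): a model that says "yes" too readily can look accurate on real security patches while its false-positive rate stays invisible.

```python
# Why a confusion matrix beats "did the model say yes?": it separates
# false positives from true positives. Data here is made up.
from collections import Counter

def confusion_matrix(truth, predicted):
    """Counts of (actual, predicted) pairs for a binary classifier."""
    return Counter(zip(truth, predicted))

truth     = [1, 1, 0, 0, 0, 0]  # 1 = actually a security patch
predicted = [1, 1, 1, 1, 0, 0]  # a yes-biased classifier

cm = confusion_matrix(truth, predicted)
print("true positives: ", cm[(1, 1)])  # 2 — perfect recall...
print("false positives:", cm[(0, 1)])  # 2 — ...hidden by a yes-bias
print("true negatives: ", cm[(0, 0)])  # 2
print("false negatives:", cm[(1, 0)])  # 0
```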

FuriouslyAdrift - 2 hours ago

Reverse engineering vulnerabilities from patches is red team 101...

proteal - 2 hours ago

It sounds to me like the safe assumption with software is that no matter how solid your stack is, there are vulnerabilities, potentially catastrophic. A question to folks more experienced than me - if my business depends on software, and I know that my software is almost certainly exploitable, how do I posture my business in such a way as to minimize the impacts of exploits like these?

oytis - 2 hours ago

The old saying of Tony Hoare about no obvious bugs vs obviously no bugs holds in the age of LLMs more than ever

j2kun - 2 hours ago

> Luckily AI can speed up defenders as well as attackers here, allowing embargoes that would previously have been uselessly short.

This is an important facet of the problem space: security risks turning into an arms race for who wants to spend more tokens.

cryo32 - an hour ago

I must admit I'm rather enjoying this particular form of shit show, mostly because it was a prediction I made in 2023 in the early days of LLMs. It wasn't really a problem related to LLMs, but a glaring hole in the thinking of current computing: the "frustratingly over-connected" and "over-trust" approach to everything. After reading Liu Cixin's "The Three-Body Problem" and noting the Dark Forest, I applied that to risk vectors and concluded that our over-connected nature, plus some form of acceleration, plus some form of negative impact, would fuck us big time.

Turns out it did.

Thus we should probably start treating our thinking model of computing as a Dark Forest, not a friendly community. That mitigates these risks to some degree.

Analemma_ - 3 hours ago

I'd argue it's actually breaking three vulnerability cultures. In addition to the two Jeff mentions, I think the culture of delaying upgrades and staying on stable versions for as long as possible is going to become increasingly untenable, if everything that's not latest can be trivially scanned and exploited. In the extreme I think there's a decent chance projects like Debian might have to radically overhaul or just shut down completely - the whole philosophy of slow and steady with old code just won't work.

There will be much wailing and gnashing of teeth around this, because a lot of tech types really resent having to update constantly, but I don't think people will have a choice. If you have a complicated stack where major or even minor version updates are a huge hassle, I'd start working now to try and clear out the cruft and grease those wheels.

papichulo2023 - 3 hours ago

Maybe it is about time for Linux to get real CI/CD and start using AI extensively.

Not just for vulnerabilities: having nice agents|skills|etc.md definitions would encourage new devs to contribute instead of dealing with an overworked maintainer repeating the same thing for the nth time.