Significant rise of reports

lwn.net

313 points by stratos123 4 days ago


chromacity - 3 days ago

> people will finally understand that security bugs are bugs, and that the only sane way to stay safe is to periodically update, without focusing on "CVE-xxx"

Linux devs keep making that point, but I really don't understand why they expect the world to embrace that thinking. You don't need to care about the vast majority of software defects in Linux, save for the once-in-a-decade filesystem corruption bug. In fact, there is an incentive not to upgrade when things are working, because it takes effort to familiarize yourself with new features, decide what should be enabled and what should be disabled, etc. And while the Linux kernel takes compatibility seriously, most distros do not and introduce compatibility-breaking changes with regularity. Binary compatibility is non-existent. Source compatibility is a crapshoot.

In contrast, you absolutely need to care about security bugs that allow people to run code on your system. So of course people want to treat security bugs differently from everything else and prioritize them.

psychoslave - 3 days ago

>software that used to follow the "release-then-go-back-to-cave" model will have to change to start dealing with maintenance for real, or to just stop being proposed to the world as the ultimate-tool-for-this-and-that because every piece of software becomes a target.

Actually, some software runs the water-heater/heat-pump system in my basement. It has a small blue-lit screen, keeps logs of consumed electricity and produced heat, and can draw small histograms. Of course there is a smart option to make it internet-connected, the kind of functionality I'm glad is disabled by default and not required for it to operate. If possible, I'll never upgrade it. Release-then-go-back-to-the-cave definitely has its place in many actual physical products in the world.

I deal with enough WTF software security in my day job as it is. Being spared the cognitive load of some appliance turning into a brick, because the company that produced it or some script kiddie on AI steroids decided that was desirable, leaves more time to explore whatever else the cosmos allows.

glimshe - 3 days ago

The last paragraph is interesting: "Overall I think we're going to see a much higher quality of software, ironically around the same level than before 2000 when the net became usable by everyone to download fixes. When the software had to be pressed to CDs or written to millions of floppies, it had to survive an amazing quantity of tests that are mostly neglected nowadays since updates are easy to distribute."

Was software made before 2000 better? And, if so, was it because of better testing or lower complexity?

3form - 3 days ago

>people will finally understand that security bugs are bugs, and that the only sane way to stay safe is to periodically update, without focusing on "CVE-xxx"

The problem is that the very same tools, I expect, are behind the supply chain attacks that seem to be particularly notorious recently. No matter where you turn, there's an edge to cut you on that one.

Shank - 3 days ago

It's important to note that this is a comment on this article: https://lwn.net/Articles/1065586/.

nayroclade - 3 days ago

> I don't know how long this pace will last. I suspect that bugs are reported faster than they are written, so we could in fact be purging a long backlog

Hopefully these same tools will also help catch security bugs at the point they're written. Maybe one day we'll reach a point where the discovery of new, live vulnerabilities is extremely rare?

sigbottle - 3 days ago

I'm actually curious about AI progress:

There's no way the AI is understanding million-line codebases a priori. We've tried that already, and it failed. What it is doing now is setting up its own extremely powerful test harnesses, gathering the information it needs, and testing efficiently.

Sure, its semantic search is already strong, but the real lesson we've learned from 2025 is that tooling is far more powerful.

That's cool! As someone who's dabbled in kernel dev for his job, I've always wanted to learn how kernel devs reliably test stuff, on real, varied hardware rather than just manual testing, but it always seemed hard.

Honestly, AI has only helped me become a better SWE because no one else has the time or patience to teach me.

amiga386 - 3 days ago

This is "the bomber will always get through" mentality for the modern era. You will invent air defences. You will write fewer bugs. You will leave code that doesn't have bugs alone, so it gains no more bugs. You will build software that finds bugs as easily as you think "enemies" find bugs, and you'll run it before you release your code.

What's the saying? Given many eyes, all bugs are shallow? Well, here are some more eyes.

piinbinary - 3 days ago

I'd be very curious to know what class of vulnerability these tend to be (buffer overrun, use-after-free, mis-set execute permissions?), and whether, armed with that knowledge, a deterministic tool could reliably find or prevent all such vulnerabilities. Can linters find these? Perhaps fuzzing? If the code were written in a more modern language, is it still likely that these bugs would have happened?

jcalvinowens - 3 days ago

> I suspect that bugs are reported faster than they are written, so we could in fact be purging a long backlog (and I hope so).

It's hard for me to imagine how this wouldn't be true. This isn't the "new normal", everyone is just running it into the ground and wringing every drop they can out of it right now.

It would be interesting to "backtest" how much higher the rate of vulnerability discovery would have been if all these new vulnerabilities were discovered in near real time as they were created, since that would be more predictive of the "new normal", in my opinion. I suspect it's not very significant: we're flushing a 20+ year backlog, and generally the rate at which vulnerabilities are created is lower today.

HAMSHAMA - 3 days ago

Probably related to this (genuinely interesting) talk given by an Anthropic researcher: https://youtu.be/1sd26pWhfmg?si=j2AWyCfbNbOxU4MF

adverbly - 3 days ago

Anecdotally, I've been seeing a higher rate of CVEs tracked by a few dependabot projects.

Seems supported by this as well: https://www.first.org/blog/20260211-vulnerability-forecast-2...

Interesting that it's been higher than forecast since 2023. Personally, I'd expect that trend to continue, given that LLMs increase both the number of bugs written and the number of bugs discovered.

0x3f - 3 days ago

Why don't we just PageRank GitHub contributors? Merged PRs approved by other high-quality contributors improve rank. New PRs get tagged by a bot with the submitter's rank. Add more scoring features (account age? employer?) as desired.
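The idea above can be sketched as a toy PageRank over an approval graph, where an edge A → B means reviewer A approved a merged PR authored by B, so A's reputation flows to B. All names, weights, and the graph itself are illustrative, not any real GitHub data or API:

```python
# Toy PageRank over a contributor-approval graph.
# Edge (a, b): reviewer `a` approved a merged PR authored by `b`.

def pagerank(edges, damping=0.85, iters=50):
    nodes = {n for a, b in edges for n in (a, b)}
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for a in nodes:
            # Dangling nodes (no approvals given) spread rank evenly.
            targets = out[a] or list(nodes)
            share = damping * rank[a] / len(targets)
            for b in targets:
                new[b] += share
        rank = new
    return rank

# Hypothetical approval history.
approvals = [
    ("maintainer", "alice"),  # maintainer approved alice's PR
    ("maintainer", "bob"),
    ("alice", "bob"),         # alice approved bob's PR
    ("bob", "newcomer"),
]
ranks = pagerank(approvals)
```

Here "bob" ends up ranked above "alice" because he was approved by both the maintainer and alice. A real system would also need Sybil resistance, approval weighting, and decay over time, which this sketch ignores.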

siruwastaken - 3 days ago

It's interesting to hear from people directly in the thick of it that these bug reports are apparently gaining value and are no longer just slop. Maybe there is hope for a world where AI helps create bug free software and doesn't just overload maintainers.

mirax - 3 days ago

This really comforts me :) I'm looking forward to a more secure and private IT future.

motbus3 - 3 days ago

The slopocalypse is here, but I would propose that open source maintainers get free access to AI tools from these big companies, so that at least they can aggregate the problems and automate some of the process.

To me, this seems like something the whole dev community should push for.

michelwague - 3 days ago

this is what i'm seeing on a micro scale. i pointed a code-davinci-002 model at my own repo and it found a subtle off-by-

ori_b - 3 days ago

Or we can stop putting everything on the internet as a vector for enforced enshittification.

tyre - 3 days ago

I wish they wouldn’t call it “AI slop” before acknowledging that most of the bugs are correct.

Let's bring a bit of nuance between mindless drivel (e.g. LinkedIn influencer posts, spammed issues where LLMs make mistakes) and using LLMs to find and build useful things.

stratos123 - 4 days ago

"On the kernel security list we've seen a huge bump of reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being only AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (fridays and tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us."

throwatdem12311 - 3 days ago

Reports being written faster than bugs being created? Better quality software than before the 2000s?

Oh my sweet summer child.

This is some seriously delusional cope from someone who drank the entire jug of kool-aid.

I'd love to be proven wrong, but the current trajectory is plain as day from current outcomes: everything is getting worse, everyone is getting overwhelmed, we are under attack even more, the attacks are getting substantially more sophisticated, and the blast radius is much bigger.

themafia - 3 days ago

An AI enthusiast with a breathless, predictive take on the future of the technology? No way! It's almost as if Wall Street is about to sour on the whole stack and there's a concerted effort to artificially push these views into the conversation to get people on board.

Then again, I'm a known crank and aggressive cynic, but you never really see any gathered data backing these points up.