Someone bought 30 WordPress plugins and planted a backdoor in all of them
anchor.host
980 points by speckx 17 hours ago
This is a perfect illustration of what cracks me up about the hyperbolic reactions to Mythos. Yes, increased automation of cutting-edge vulnerability discovery will shake things up a bit. No, it's nowhere near the top of what should be keeping you awake at night if you're working in infosec.
We've built our existing tech stacks and corporate governance structures for a different era. If you want to credit one specific development for making things dramatically worse, it's cryptocurrencies, not AI. They've turned the cottage industry of malicious hacking into a multi-billion-dollar enterprise that's attractive even to rogue nations such as North Korea. And with this much at stake, they can afford to simply buy your software dependencies, or to offer one of your employees some retirement money in exchange for making a "mistake".
We know how to write software with very few bugs (although we often choose not to). We have no good plan for keeping big enterprises secure in this reality. Autonomous LLM agents will be used by ransomware gangs and similar operations, but they don't need FreeBSD exploit-writing capabilities for that.
> We know how to write software with very few bugs (although we often choose not to)
Do we, really? Because a week doesn’t go by when I don’t run into bugs of some sort.
Be it in PrimeVue (even now the components occasionally have bugs; they keep putting out new major versions, but none are truly stable and bug-free), or Vue (its SFCs did not play nicely with complex TS types), or the greater npm ecosystem, or Spring Boot or Java in general, or Oracle drivers, or whatever unlucky thread pooling solution has to manage those Oracle connections, or kswapd acting up in RHEL-compatible distros and eating CPU to the point of freezing the whole system instead of just doing OOM kills, or Ansible failing to reload systemd service definitions, or llama.cpp speculative decoding not working for no good reason, or Nvidia driver updates bringing the whole VM down after a restart, or Django having issues with MariaDB, or just general weirdness around Celery and task management, and a million other things.
No matter where I look, up and down the stack, across different OSes and tech stacks, there are bugs. If there is truly bug-free code (or as close to that as possible) then it must be in planes or spacecraft, because when it comes to the kind of development that I do, bug-free code might as well be a myth. I don't think everyone made a choice like that - most are straight up unable to write code without bugs, often due to factors outside of their control.
> Do we, really?
Yes, or pretty close to it. What we don't know how to do (AFAIK) is do it at a cost that would be acceptable for most software. So yes, it mostly gets done for (components of) planes, spacecraft, medical devices, etc.
Totally agreed that most software is a morass of bugs. But giving examples of buggy software doesn't provide any information about whether we know how to make non-buggy software. It only provides information about whether we know how to make buggy software—spoiler alert: we do :)
There is a huge wetware problem too. Like if I can send you an email or other message that tricks you and gets you to send me $10k, what do I care if the industry is 100% effective at blocking RCE?
That software also often has bugs. It's usually a bit more likely that they are documented, though, and unlikely to cause a significant failure on their own.
Building around bugs that you know exist but don't know where is also a part of it: reliability in the face of bugs. The mere existence of bugs isn't enough to call the software buggy if the outcome is reliable (e.g., triple modular redundancy).
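The redundancy idea can be sketched in a few lines of Python. This is an illustrative toy, not any real avionics scheme: three independent (and here deliberately contrived) implementations compute the same result, and a majority vote masks a fault in any single one.

```python
from collections import Counter

def tmr(replicas, x):
    """Triple modular redundancy: run all replicas and majority-vote.

    A single faulty replica is outvoted by the two correct ones, so the
    overall output stays reliable even though a bug exists somewhere.
    """
    results = [f(x) for f in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one replica disagrees")
    return value

# Two correct implementations and one with a deliberate fault.
def square_a(x): return x * x
def square_b(x): return x ** 2
def square_buggy(x): return x * x + 1  # off-by-one fault

assert tmr([square_a, square_b, square_buggy], 5) == 25
```

The system as a whole gives the right answer even though one of its components is buggy, which is the sense of "reliable despite bugs" meant above.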
For a silly example, see how Python programs have plenty of bugs, but they still (usually) don't allow for the kind of memory exploits that C programs give you.
You could say that Python is designed around preventing these memory bugs.
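A minimal sketch of that contrast: an out-of-bounds read in Python fails loudly with an exception, whereas the analogous C code could silently read adjacent memory.

```python
buf = [0] * 8

try:
    value = buf[16]  # past the end of the list
except IndexError:
    # The bug is still a bug, but it surfaces as a controlled, catchable
    # error instead of undefined behavior or an exploitable memory read.
    value = None

assert value is None
```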
Then we can't do it. Cost is a requirement
Cost is a parameter subject to engineering tradeoffs, just like performance, feature sets, and implementation time.
Security and reliability are also parameters that exist on a sliding scale; the industry has simply chosen to slide the "cost" parameter all the way to one end of the spectrum. As a result, the number of bugs and hacks observed is far enough from the desired value of zero that it's clear the true requirements for those parameters cannot honestly be said to be zero.
> the number of bugs and hacks observed are far enough from the desired value of zero
Zero is not the desired number, particularly not when discussing "hacks". This may not matter in the current situation, but there's a lot of "security maximalism" in industry conversations today, and people seem not to realize that dragging the "security" slider all the way to the right means not just the costs becoming practically infinite, but also the functionality and utility of the product falling to zero.
I know a lot of security researchers will disagree with this notion, but I personally think that security (and privacy; I'm going to refer to both as "security" for brevity here) is an overhead. I think that's why it needs to exist *and be discussed* as a sliding scale. I do find a lot of people in this space chase some ideal without any consideration for practicality.
Mind, I'm not talking about financial overhead for the company/developer(s), but rather a UX overhead for the user. It often increases friction and might even require education/training to make use of the software it's attached to. Much like body armor increases the weight one has to carry and decreases mobility, security has (conceptually) very similar tradeoffs: cognitive instead of physical overhead, and time/interactions/hoops instead of mobility. Likewise, sometimes one might pick a lighter Kevlar suit, whereas other times a ceramic plate is appropriate.
Now, body armor is still a very good idea if you're expecting to be engaged in a fight, but I think we can all agree that not everyone on the street in, say, a random village in Austria, needs to wear ceramic plates all the time.
The analogy does have its limits, of course ... for example, one issue with security (which firmly slides it towards erring on the safe side) as compared to warfare is that you generally know if someone shot at you and body armor saved you; with security (and, again, privacy), you often won't even know you needed it even if it helped you. And both share the trait that if you needed it and didn't have it, it's often too late.
Nevertheless, whether worth it or not (and to be clear, I think it's very worth it), I think it's important that people don't forget that this is not free. There's no free lunch --- security & privacy are no exception.
Ultimately, you can have a super-secure system with an explicit trust system that will be too much for most people to use daily; or something simpler (e.g. Signal) that sacrifices a few guarantees to make it easier to use ... but the lower barrier to entry ensures more people have at least a baseline of security & privacy in their chats.
Both have value and both should exist, but we shouldn't pretend the latter is worthless because there are more secure systems out there.
The question was not whether it was possible within price boundary X, but whether it was possible at all. There is a difference; please don't conflate possibility with feasibility.
Is having problematic features that cause problems also a requirement?
The answer to the above question will reveal whether someone is an engineer or an electrician/plumber/code monkey.
In virtually every other engineering discipline engineers have a very prominent seat at the table, and the opposite is only true in very corrupt situations.
Unlimited budget and unlimited people won't solve unlimited problems with perfection.
Even basic theorems of science are incorrect.
Also people keep insisting on using unsafe languages like C.
It depends on exactly what you are doing, but there are many languages which are efficient to develop in, if less efficient to execute (like Java, JavaScript, and Python), which are better in many respects, and other languages which are less efficient to develop in but more efficient to run, like Rust. So at the very least it is a trilemma and not a dilemma.
The language plays a role, but I think the best example of software with very few bugs is something like qmail and that's written in C. qmail did have bugs, but impressively few.
Writing code that carefully, however, is really not something you just do; it would require a massive improvement of skills overall. The majority of developers simply aren't skilled enough to write something anywhere near the quality of qmail.
Most software also doesn't need to be that good, but then we need to be more careful with deployments. The fact that someone just installs WordPress (which itself is pretty good in terms of quality) and starts installing plugins from untrusted developers indicates that many still don't have a security mindset. You really should review the code you deploy, but I understand why many don't.
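Short of a full code review, one low-effort mitigation is to pin the exact artifacts you deploy and refuse anything that changes underneath you. A minimal sketch in Python; the placeholder hash and the helper names are made up for illustration, not part of any real WordPress tooling:

```python
import hashlib

# Hypothetical pinned digest, recorded when the plugin archive was
# first reviewed. A real pin file would hold one entry per plugin.
PINNED_SHA256 = "0" * 64  # placeholder value for illustration

def sha256_of(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_plugin(path):
    """Refuse to deploy a plugin archive whose hash no longer matches the pin."""
    digest = sha256_of(path)
    if digest != PINNED_SHA256:
        raise RuntimeError(f"{path}: hash {digest} does not match pin")
```

This wouldn't have caught the backdoor at review time, but it would stop a silently swapped update from reaching production without a human looking at the diff first.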
> if less efficient to execute like Java and Javascript and Python
One of these is not like the others...
Java (JVM) is extremely fast.
JVM is fast for certain use cases but not for all use cases. It loads slowly, takes a while to warm up, generally needs a lot of memory and the runtime is large and idiosyncratic. You don't see lots of shared libraries, terminal applications or embedded programs written in Java, even though they are all technically possible to do.
The JVM has been extremely fast for a long long time now. Even Javascript is really fast, and if you really need performance there’s also others in the same performance class like C#, Rust, Go.
Hot take, but: Performance hasn’t been a major factor in choosing C or C++ for almost two decades now.
I think this discussion distracts a bit from the main point.
The main point is that there are super widespread software systems in use that we know aren't secure, and we certainly could do better if we (as the industry, as customers, as vendors) really wanted.
A prime example is VPN appliances ("VPN concentrators") to enable remote access to internal company networks. These are pretty much by definition Internet-facing, security-critical appliances. And yet, all such products from big vendors (be they Fortinet, Cisco, Juniper, you name it) had a flood of really embarrassing, high-severity CVEs in the last few years.
That's because most of these products are actually from the 80s or 90s, with some web GUIs slapped on, often dredged through multiple company acquisitions and renames. If you asked a competent software architect to come up with a structure and development process that are much less prone to security bugs, they'd suggest something very different, more expensive to build, but also much more secure.
It's really a matter of incentives. Just imagine a world where purchasing decisions were made to optimize for actual security. Imagine a world where software vendors were much more liable for damage incurred by security incidents. If both came together, we'd spend more money on up-front development / purchase, and less on incident remediation.
> No matter where I look, up and down the stack, across different OSes and tech stacks, there are bugs.
I’m not sure I’d go quite as far as GP, but they did caveat that we often choose not to write software with few bugs. And empirically, that’s pretty true.
The software I’ve written for myself, or where I’ve taken the time to do things better or rewrite parts I wasn’t happy with, has had remarkably few bugs. I have critical software still running—unmodified—at former employers which hasn’t been touched in nearly a decade. Perhaps not totally bug-free, but close enough that the bugs haven’t been noticed or mattered enough to bother pushing a fix and cutting a release.
Personally I think it’s clear we have the tools and capabilities to write software with one or two orders of magnitude fewer bugs than we choose to. If anything, my hope for AI-coded software development is that it drops the marginal cost difference between writing crap and writing good software, rebalancing the economic calculus in favor of quality for once.
> I’m not sure I’d go quite as far as GP, but they did caveat that we often choose not to write software with few bugs. And empirically, that’s pretty true.
Blame PMs for this. Delivering by some arbitrary date on a calendar means that something gets shipped regardless of quality. Make it functional for 80% of uses, then we'll fix the remaining bits in later releases. However, that doesn't happen, as the team is assigned new tasks, because new tasks/features are what bring in new users, not fixing existing problems.
I don’t disagree, but is the alternative unbounded dev where you write code until it’s perfect? That doesn’t sound like a better business outcome. The trade-off can’t be “take as long as you want”.
“The alternative is that nothing will ever get released because devs will take forever making it perfect” is a really lame take.
We have literally countless examples of software that devs have released entirely of their own volition when they felt it was ready.
If anything, in my experience, software that’s written a little slower and to a higher standard of quality is faster-releasing in the long (and medium) run. You’d be shocked at how productive your developers are when they aren’t task-switching every thirty minutes to put out fires, or when feature work isn’t constantly burdened by having to upend unrelated parts of the code due to hopelessly interwoven design.
I'm happy to be reoriented with examples. Please provide some? You said countless but mentioned none.
I'd say SQLite is one good example:
https://sqlite.org/chronology.html
Regular releases for over a quarter of a century now, and it's renowned for its reliability.
I think PMs fail to understand categories of change in terms of complexity because they focus on the user-facing surface and deal in timelines. A change that brings in a big feature can be straightforward because it perfectly fits the existing landscape. A seemingly trivial change can have a lot of complexity that is hard to predict in terms of timelines.
There is also the angle of asking for an estimate without allocating time for the estimation itself.
For lack of a better word, I think it should be driven by "complexity". The firmness of an estimate should be inversely proportional to the complexity. Adding a field to a UI when it is also exposed via the API is generally low complexity, so my estimate would likely hold. We can provide an estimate for a major change, but that estimate will be soft and subject to stretching, and it is the role of the PM to communicate it accordingly to the stakeholders.
Some coding doesn't fit your schedule. If you've scheduled 2 weeks, but it takes 3, then it takes 3. Scheduling it to take 2 does nothing to actually make the coding faster.
3 sounds fine.
Then I ask: why not add a week to how long that thing will take, meaning it stretches two sprints (or whatever you call it).
Add upfront. Then if you get to hard convo where someone says “do it sooner” you say “not possible.”
The fundamental problem remains: it’s difficult to predict how long it will take to solve a series of puzzles. I worked in a dev group where we’d take the happy-path estimate and double it… it didn’t help much. Often I’d think something would take me a week, so two weeks were allotted, but then I’d make a discovery in my first hour or day that reduced the dev time to a couple of days. Other times, there were tasks that I thought I’d solve in a few days that took me weeks, because I couldn’t foresee some series of problems to overcome. Taking a guess and adding time to it just shifts the endpoint of the guess. It didn’t help us much.
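That "doubling didn't help much" experience matches what a quick simulation suggests, under the assumption (mine, not data from the thread) that task durations are heavy-tailed, modeled here as lognormal around a 5-day happy-path guess:

```python
import random

random.seed(0)

# Assumed model: true duration is lognormal with median ~5 days.
# Most tasks finish near the guess; a few blow up badly.
guesses = [5.0] * 1000
actuals = [random.lognormvariate(1.6, 0.8) for _ in guesses]

def miss_rate(estimates):
    """Fraction of tasks that overrun their estimate."""
    return sum(a > e for a, e in zip(actuals, estimates)) / len(actuals)

plain = miss_rate(guesses)                     # 5-day estimates
doubled = miss_rate([g * 2 for g in guesses])  # 10-day estimates

# Doubling reduces the miss rate, but the long tail still produces
# plenty of overruns: fixed padding can't absorb multiplicative
# uncertainty, which is the "just shifts the endpoint" effect.
assert 0 < doubled < plain
```

The numbers here depend entirely on the assumed distribution; the point is only that padding a point estimate moves it without removing the variance that causes the misses.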
That's the point I am making, and the point of asking "what is the alternative"
Developers aren't alone in adhering to schedules. Many folks in many roles do it. All deal with missed deadlines, success, expectation management, etc. No one operates in magical no-timeline land unless they do not at all answer to anyone or any user. Not the predominant model, right?
So rather than just say "you can blame the PMs" I'd love to hear a realistic-to-business flow idea.
I am not saying I have the answers or a "take". I've both asked for and been asked for estimates and many times told people "I can't estimate that because I don't know what will happen along the way."
So, it's not just PMs. It's the whole system. Is there a real solution or are we pretending there might be? Honest inquiry.