Canonical/Ubuntu have been under DDoS for more than 15h
status.canonical.com
61 points by jtlebigot 4 hours ago
In the UK they have this issue called "TV pickup" (https://en.wikipedia.org/wiki/TV_pickup). TV pickup is where everyone in the UK watching a popular TV show gets up to boil a high-powered tea kettle at the same time during an ad break. This causes a temporary surge in electricity demand and can lead to real outages. It was a mystery at first but is now accounted for.
I suspect the global internet is facing an "agent pickup" problem, where significant changes (e.g., releases of new frontier models or new package versions) put unpredictable pressure on arbitrary infrastructure as millions of distributed agents act on the change simultaneously.
We're at the stage where we blame AI for anything as a first reaction?
(Love the tv pickup story. I also thought of that, in other situations)
While the timing with the copy.fail patches mentioned by a few comments here seems suspicious indeed, I have seen this repeating over the last few weeks: packages.ubuntu.com was hardly reachable on some days, causing apt-get to take forever to update the system. They have been struggling hard recently, it seems. Best of luck to the people having to deal with this mess on a holiday!
Tinfoil hat mode: a competitor wants to exploit copy.fail on some ubuntu servers, and is DDoSing canonical so that they can't update and thus patch the vuln
Double tinfoil hat mode: an attacker learned of my plan to finally update my personal computer out of 20.04 today and is DDoSing canonical so I can't do that and I remain vulnerable to the backdoors they've found.
The plot thickens...
If you can access AF_ALG on a server you don't need to do shenanigans like that. It's much easier to just find another bug and exploit that one instead.
The copy.fail website is very silly, it is not a special bug. If anyone gets compromised by that vuln their node architecture was broken anyway, patching copy.fail doesn't help.
In what way is it "not a special bug"? It's a publicly known root-from-RCE exploit. Those cannot be a dime a dozen. I'm sure it's especially interesting for any shared hosting services which might be affected and could be delayed in patching. I could find places running containerized services and exfiltrate secrets from parallel services, no?
What constitutes "special" for you, out of curiosity? Something chaining with a hypervisor exploit?
I thought copy.fail is a privilege escalation exploit, becoming root from a regular user? Am I missing something?
How would "node architecture" make people vulnerable to this?
You have to have shell access to a victim first right? Or am I missing something?
Seems reasonable to assume it's something to do with the recently publicized exploits. More likely, this could be an extortion attempt by criminals rather than a competitor.
This seems to be pretty targeted, and with the services affected like livepatch and such this could indeed be an actor DDoSing to avoid patches rolling out for copy.fail
We are so broken as a society that DDoSing Ubuntu is now a thing.
Noticed it because snap didn't work, snap has its own status page just fyi: https://status.snapcraft.io/
Frustrating, because the Slack snap is broken, so every day you have to downgrade it, and I guess you can't do that without connectivity.
This might be the incentive I need to finally purge snap.
I like to imagine it's returning a 500 error response asking you to email rhonda@ubuntu.com