Google Public CA is down
status.pki.goog
230 points by aloknnikhil 6 hours ago
It is a well-known fact that the moment YouTube goes down, the collective productivity of Earth increases by approximately 4,000%, which is immediately squandered by everyone going to Hacker News to read comments about YouTube being down. I myself have taken to podcasts… an ancient medium in which people simply talk at you for ninety minutes without a single sponsorship for a mobile game, and this is considered a failure.
They've begun injecting obnoxious ads into the downloadable mp3s on a lot of podcasts I've found. Hyperlocal ads for tire shops and bakeries.
I don't want to buy tires, I want to learn about ______. The ads don't even make sense because they're irrelevant.
VPN to Sweden to get the IP-geolocated ads to retarget there. The ads still exist, but they're less obnoxious, and they're often in Swedish so you don't have to know what they're on about anyway.
Careful: I enjoyed this bonus (being in Japan and not being able to keep up with the ads)... so much so that I started ignoring the Japanese. Including my wife. You can imagine how well that went.
Welcome to radio 2.0.
Give it another 10-20 years and your 2 hour podcasts will be 30 minutes of morning zoo DJ banter, 10 minutes of guests, and 1.5 hours of ads.
We’ll have reached peak 90s all over again. With any luck we’ll avoid recreating the conditions for another Nickelback and can stay in the weird zone where Trip Hop and pop punk could chart at the same time.
> 2 hour podcasts
You have high hopes. The next YT tool will split anything long into 30-second reels, as brains will be completely incapable of focusing for longer.
I listen to multi-hour unsponsored content on Youtube almost exclusively.
Well, one must also argue the opposite. I myself have gained immense knowledge from YouTube. I have learned things like phone screen and battery replacements. I call myself a mechanic from the school of YouTube and have saved myself at minimum $10k in repairs by doing the work myself. I have learned to make endless food recipes and to create things like giant bubbles or slime for my kids. My point is: sure, for some people YouTube is a massive time sink, but I also wonder how much it has improved the knowledge, skills, and abilities of others. My dad often mentions how much it would have done for him had he had YouTube when he was younger. He talks about having to go to the library and, if he was lucky, finding a book that could show you the knowledge you were looking for. Now, he says, you can find not just general knowledge but specific knowledge, like exactly how to do job xyz on a particular car make, model, and year. Ultimately I just cannot imagine life without the wealth of knowledge YouTube has given me.
Congratulations! You’ve successfully avoided YouTube Shorts.
YT shorts are up to 3 minutes now.
At this point it is just YT Vertical Videos.
Personally, I just scroll through them. They break the feed into well-defined "chapters", at the end of which I can decide whether to look into the next one or go somewhere else because there's nothing good there today.
Also, there's this woman who makes very funny shorts about software development, and longer videos that aren't as good. I look for her shorts too.
I just stay on my subscriptions page. Most of them don’t do Shorts, and the few that do don’t do many so they’re easy to ignore.
Lol I laughed out loud reading this comment. When shorts first came out they annoyed me to no end. I searched for how to block them through settings or other ways to just make them go away.
But nowadays I can admit there are a few, very few, content creators whose shorts are very informative and straight to the point: they cover a topic, give you the facts, and let you decide if you want to seek more. Sometimes it is nice to have the 30-second Coles Notes version versus a video stretched out to 10 minutes to be eligible for monetization.
BUT, and this is a big but, the shorts and similar video platform trends scare me as a parent. I can see how my kids find a 1.5 hour movie boring but can scroll endlessly through shorts. It might seem harmless to let your kid just scroll on YouTube, but from my perspective it is like an addiction: kids get that dopamine hit watching a clip and, seconds later, are watching something else. I've learned that it is very important to be aware of what your kids are becoming accustomed to and to push them in the right direction.
People went ballistic on me a few months ago for bringing this up, but this is exactly the kind of outage that makes me really, really worried about extremely short lived certificates. https://news.ycombinator.com/item?id=46118371
I'm not sure I follow. This outage seems like it occurred for less than 1 day. The post you link to is about having certificates expire after 45 days. What's the connection you see?
Ah, so that’s probably why YouTube is also down (at the time of this comment)
I am playing a YouTube video (since the time of this comment) and it has not been interrupted.
I am too. But I just loaded up a new youtube page and it's completely white except for a few menu buttons.
You can still see your subscription videos, just not the homepage.
Searching also works. Actually it seems only the recommendation system is down, which I'd say isn't completely a bad thing.
It is pretty annoying for those of us for whom the recommendation system actually works well.
My subscriptions page just shows an error. And the app version won't load at all.
I'm able to play videos that are bookmarked in my browser, but the YouTube home page errors out.
> I am playing a YouTube video (since the time of this comment) and it has not been interrupted.
So you're using snakeoil certificates and MITM proxies at work?
Perhaps the same underlying cause, but there's no reason why Google's public CA being temporarily down would bring YouTube down.
If multiple services are affected, it's probably some underlying infrastructure issue.
It could prevent Google from rotating in new instances, because they aren't able to obtain a certificate.
Although, if that is the case, I would expect it to impact basically every Google site.
Google uses mTLS for communications between systems and it could just be bad timing.
Yeah, companies which also operate CAs can print as many certs as they want, so it's tempting to use a bunch everywhere with very short expiry.
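(Not Google's actual setup, just a minimal stdlib sketch of what mTLS between two internal services looks like; every hostname and file path below is a placeholder. Both ends have to present a currently valid cert, so if the internal CA stops issuing and the certs are short-lived, either side can start failing handshakes.)

```python
# Sketch of mutual TLS with Python's stdlib ssl module. All paths/hosts are placeholders.
import socket
import ssl

# Server-side context: present its own cert and require/verify a client cert
# (this context would be used to wrap the server's listening socket).
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
server_ctx.load_verify_locations(cafile="internal-ca.pem")
server_ctx.verify_mode = ssl.CERT_REQUIRED

# Client-side context: present its own cert and verify the server's against the internal CA.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="internal-ca.pem")
client_ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")

with socket.create_connection(("service.internal", 8443)) as sock:
    with client_ctx.wrap_socket(sock, server_hostname="service.internal") as tls:
        # If either cert has expired because renewal failed, the handshake above fails instead.
        print(tls.getpeercert()["notAfter"])  # when the server's cert expires
```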
The status history on the page makes it seem like this was intentional?
> 17 Feb 2026 11:32 PST A rollout is going to prevent issuance from occurring. We will provide an estimate on when issuance will stop.
> 17 Feb 2026 12:14 PST Issuance is beginning to stop. A fix to resolve the issue will roll out in about 8 hours
This usually indicates that the CA was issuing non-compliant certificates and needed to prevent further non-compliance. Will be interesting to watch Bugzilla for the incident report: https://bugzilla.mozilla.org/buglist.cgi?product=CA%20Progra...
What qualifies as a non-compliant certificate?
It doesn't comply with one or more root store policies (which all incorporate the Baseline Requirements by reference, which incorporate various specs, such as RFC5280, by reference).
Mozilla root store policy: https://www.mozilla.org/en-US/about/governance/policies/secu...
Chrome root store policy: https://googlechrome.github.io/chromerootprogram/
Apple root store policy: https://www.apple.com/certificateauthority/ca_program.html
Baseline Requirements: https://github.com/cabforum/servercert/blob/main/docs/BR.md
There are countless examples of non-compliant certificates documented in the Bugzilla component I linked above. A recent example: a certificate which was backdated by more than 48 hours, in violation of section 7.1.2.7 of the Baseline Requirements: https://bugzilla.mozilla.org/show_bug.cgi?id=2016672
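For illustration only, here is a minimal sketch of that 48-hour backdating check, assuming Python's `cryptography` package and an issuance timestamp obtained separately (for example from a CT log entry); this is not the root programs' or any CA's actual tooling.

```python
# Flag a certificate whose notBefore is backdated more than 48 hours
# relative to a separately known issuance time (e.g. from a CT log entry).
from datetime import datetime, timedelta, timezone
from cryptography import x509  # assumes the `cryptography` package is installed

def is_backdated(pem_bytes: bytes, issued_at: datetime,
                 limit: timedelta = timedelta(hours=48)) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    not_before = cert.not_valid_before_utc  # cryptography >= 42; older versions use not_valid_before
    return issued_at - not_before > limit

# Hypothetical usage:
# with open("cert.pem", "rb") as f:
#     print(is_backdated(f.read(), datetime(2026, 2, 17, 19, 0, tzinfo=timezone.utc)))
```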
The heading above that:
"There is an ongoing incident that will force issuance to be halted."
Feels like they were alerted to some current problem severe enough that "turn it off now" was the right move. Breaking the baseline requirements somehow maybe?
> A fix to resolve the issue will roll out in about 8 hours
oof
I guess it's good Google hasn't succeeded in forcing people to renew certificates every 8 hours (yet)
In theory 8 hours of downtime should be fine for a CA. Obviously not ideal, but the PKI system is not meant to be a live system.
Fairly sure it used to be pretty much a manual process where someone had to actually process your request for a certificate on the other side.
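Back-of-envelope, assuming the common practice of renewing at roughly two-thirds of a cert's lifetime (not any particular CA's policy): the remaining third is how long the CA can be unreachable before already-deployed certs start expiring.

```python
# If you renew at 2/3 of a cert's lifetime, the remaining 1/3 is roughly how long
# the CA can be down before live certs start expiring.
from datetime import timedelta

for lifetime in (timedelta(days=398), timedelta(days=90), timedelta(days=45), timedelta(hours=8)):
    buffer = lifetime / 3
    print(f"{lifetime!s:>18} lifetime -> ~{buffer} of tolerable CA downtime")
```

So an 8-hour outage is invisible with today's lifetimes, but becomes an immediate problem once certs only live for hours.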
That feeling when you have to suspend production service until the time lock safe can be opened.
That feeling when you finally get the timelock safe open and have to do certificate work that shatters YouTube’s connection to the account personalization systems.
The same amount of time it feels like it takes for my google functions to deploy.
It's a good thing we have ever-shrinking certificate lifetimes and automation never breaks. That's what I've been told, anyway.
Yeah, this could end up as the actual root cause of The Great Oops that I've been raving about for years. And Google probably would be the right company to fuck it up in the worst way possible since Google Knows Best In All Situations.
Please tell me more about The Great Oops
It's inevitable that one of the major cloud providers will irrecoverably delete all customer data with one single fat-fingered command. Though in google's case I'll also consider the prophecy to be fulfilled if they delete their own data.
It will forever be known as The Great Oops.
It's not inevitable, it's essentially impossible.
There are a few things that can cause tremendously widespread outages, essentially all of them network configuration changes. Actually deleting customer data is dramatically more difficult to the point of impossible - there are so many different services in so many different locations with so many layers of access control. There is no "one command" that can do such a thing - at the scale of a worldwide network of data centers there is no "rm -rf /".
Google accidentally deleted customer location history data from customer devices (after intentionally deleting it from Google servers) just last year.
If you didn't back it up yourself, it's gone forever.
Delete a decryption key. Good luck! I'll see you at the end of time.
Break your control plane, and you can't stop the propagation of poison.
Propagate the wrong trust bundle... everywhere.
Also, it's not about the delete command. It's about the automatic cleanup following behind it that shreds everything, or repurposes the storage.
Ah, but you fail to account for Google's incredible knack for building tools designed to do things at scale. Or for putting AI in things that don't need it.
The possibility that Google will either unleash a malicious AI on their infrastructure, or develop a way to destroy a lot of data at scale quite efficiently, or some combination of the two, is far from zero.
Bear in mind, this "Little Oops" should also have been impossible: https://www.techspot.com/news/103207-google-reveals-how-blan...
.....no?
"We deployed this private cloud with a missing parameter and it wasn't caught" is as different from "we wiped out all customer data" as hello world is from Kubernetes.
No one promised this "should be impossible". Did you confuse that with "we'll take steps to ensure this never happens again"?
It's pretty much half the puzzle actually.
You contend there's no global rm rf for a global cloud provider, but clearly a missing parameter can rm rf a customer in an irrecoverable manner.
The only half you're missing is... how every major cloud outage happens today... a bad configuration update. These companies have hundreds of thousands of servers, but they also use orchestration tools to distribute sets of changes to all of them.
You only need a command to rm rf one box, if you are distributing that command to every box.
Now sure, there are tons of security precautions and checks and such to prevent this! But pretending it's impossible is delusional. People do stupid stuff, at scale, every day.
The most likely scenario is a zero day in an environment necessitating an extremely rapid global rollout, combined with a plain, simple error.
And the most telling thing about most of these outages is that the provider later admits in their postmortem that they just didn't really understand how the system they made worked until it fell over and were forced to learn how it really works.
It's the sort of thing that used to keep me up at night.
The release process, monitoring checks, etc. for a customer's private cloud is generally significantly different from the release process for a global product. I'm not going to get any more specific for all the standard NDA reasons, but having worked for Google and Microsoft among others....no, the risk you describe doesn't translate from one to the other.