Waymo halts service during S.F. blackout after causing traffic jams
missionlocal.org | 315 points by rwoll | 3 days ago
I was driving across the east side of SF and hit a patch of lights that were out.
The Waymos were just going really slow through the intersection. It seemed that the "light is out means 4-way stop" rule caused them to go into ultra-timid mode. And of course the human drivers did the typical slow-and-roll, with decent interleaving.
The result was that each Waymo took about 4x as long to get through the intersections. I saw one Waymo get bluffed out of its driving slot by cross traffic for perhaps 8 slots.
This was coupled with the fact that the Waymos seemed to all be following the same route. I saw a line of about a dozen trying to turn left, which is the trickiest thing to navigate.
And of course I saw one driver get pissed off and drive around a Waymo that was advancing slowly, with the predictable result that the Waymo stopped and lost three more slots through the intersection.
On normal days, Waymos are much better at the 4-way stops than they used to be a few years back, by which I mean they are no longer dangerously timid. The Zoox (Amazon) cars are more like the Waymos used to be.
I expect there will be some software tweaks that will improve this situation, both in routing around self-induced congestion and in handling intersections with dead lights.
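Purely as illustration, here's a toy sketch of what that dead-light fallback might look like in Python. Nothing here is from Waymo's actual stack; the function names, the 30-second-per-vehicle wait guess, and the reroute threshold are all invented:

```python
from enum import Enum, auto

class SignalState(Enum):
    GREEN = auto()
    YELLOW = auto()
    RED = auto()
    DARK = auto()  # power outage: no lamps lit at all

def plan_intersection(signal: SignalState,
                      queued_vehicles: int,
                      reroute_cost_s: float) -> str:
    """Toy policy, not Waymo's: a dark signal is treated as an all-way
    stop (the rule in CA Vehicle Code 21800(d)), and a long queue of
    stopped vehicles triggers a reroute check instead of joining it."""
    if signal is SignalState.DARK:
        # Hypothetical guess: ~30 s per vehicle in ultra-timid mode.
        expected_wait_s = queued_vehicles * 30.0
        if expected_wait_s > reroute_cost_s:
            return "reroute"       # avoid the self-induced jam
        return "all_way_stop"      # proceed one slot at a time
    return "obey_signal"

# A dozen Waymos queued for a left turn, detour costs ~3 minutes:
print(plan_intersection(SignalState.DARK, 12, 180.0))  # -> "reroute"
```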
Note that I didn't see any actually dead Waymos as others have reported here. I believe this is an extreme failsafe mode, and perhaps related to just too much weirdness for the software to handle.
It would be interesting to see the internal post mortem.
It's either people complain that they go slow and are too careful, or they video and complain about every small traffic infringement they make. Humans never drive 100% within the law and no one really cares, but the second a single one of those things steps out of line, it's an uproar. They have to drive ultra-conservatively. How long have people been complaining about that one cat?
>either people complain that they go slow and are too careful, or they will video and complain about every small traffic infringement that they make.
Is there a name for these (and related) effects? Obviously, in a group of several hundred thousand people, there will always be at least a few who complain about something for exactly opposite reasons. That's not a signal of usefulness. I feel we need a name for the some-rando-has-an-opinion-that-gets-picked-up-and-amplified-by-"the algorithm" phenomenon. And the more fringe/out-there the opinion, the more passionate that particular person is likely to be about the issue, while "most" people feel "eh" about the whole thing.
the fact that there's practically no visible regulatory response to autonomous/remote-controlled vehicles that violate traffic laws or put people/pets/property at risk is a big part of why i'm personally not okay with these vehicles being allowed to use public rights-of-way.
when a waymo can get a traffic ticket (commensurate with google's ability to pay, a la the new income-based speeding ticket pilot programs in LA and SF), and when corporate officers down to engineers bear responsibility for failures, i think a lot more people will stop seeing these encroachments onto our commons as a nuisance.
story time: i've literally had one of those god awful food delivery robots run straight into me on a sidewalk. once, one of them stopped in my way and would not move, so i physically moved it myself and it followed me to my apartment. i'm about to start cow-tipping them (gently, because i don't want a lawsuit alleging property damage, even though they're practically just abandoned tech scrap without a human operator nearby to take responsibility).
Failed pretty badly, but with no reported injuries or even accidents, so not that badly.
And if you’re Waymo, it’s a short-term reputation hit but great experience to learn from and improve.
>Failed pretty badly but no reported injuries or even accidents so not that badly.
Just because no integer lives were wasted doesn't mean we can't sum the man-hours and get a number greater than 1.
Using that math it would be better if they were faster even if they killed somebody.
That's a repulsive argument... Just because an argument is logically valid doesn't mean it's rational or reasonable.
Also, when attempting that math, make sure you account for the buffer everyone already builds into their life. No sense in double-counting the extra 10 minutes I'm angry in traffic when I'd otherwise be angry sitting at home, doom-scrolling some media feed with the 10 minutes the faster robotaxi saved me.
Your naive feel good attitude (and you're not alone in it, that crap permeates white collar western society) is exactly the problem and being all emotional about it only worsens your ability to reason about it.
Whenever we do something "good" at societal scale, be it building ADA ramps or engaging in international trade of consumer goods or, in this case, having transportation infrastructure, there is always some tradeoff like this. We can do the thing in a manner that's safer to life and limb, but that almost always has tradeoffs that make the thing less accessible or worse performing. We could have absurdly low maximum vehicle speeds; that would save lives, but the time and wealth lost (which are convertible to each other on some level) render the tradeoff not worth it (to the public at large).
You can value a whole life loss higher than man hours. You can value a child more than the elderly. You can make all sorts of adjustments like that but they do not change the fundamental math of the problem.
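For what it's worth, that math can be made concrete with a toy back-of-the-envelope in Python. Every number below is invented for illustration except the value-of-statistical-life figure, which is roughly in the ballpark of current US DOT guidance (~$13M):

```python
# Toy version of the speed-vs-safety tradeoff argued above.
# All inputs are hypothetical illustrations.
VALUE_OF_STATISTICAL_LIFE = 13_000_000  # USD, ballpark of US DOT guidance
VALUE_OF_AN_HOUR = 30                   # USD, invented time value

def timidity_net_cost(extra_minutes_per_trip: float,
                      trips: float,
                      deaths_avoided: float) -> float:
    """Cost of driving ultra-timidly (everyone's lost time) minus its
    benefit (statistical lives saved). Positive means the timid policy
    costs more than it saves, under this accounting."""
    time_cost = trips * (extra_minutes_per_trip / 60) * VALUE_OF_AN_HOUR
    life_benefit = deaths_avoided * VALUE_OF_STATISTICAL_LIFE
    return time_cost - life_benefit

# 5 extra minutes across 100M trips vs. one statistical life saved:
print(timidity_net_cost(5, 100_000_000, 1))  # 250M - 13M = +237M USD
```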
> Your naive feel good attitude (and you're not alone in it, that crap permeates white collar western society) is exactly the problem and being all emotional about it only worsens your ability to reason about it.
It's not a feel-good attitude. I'm only objecting to your shallow take arguing that the commoditization of human life is reasonable (i.e., touch grass), similar to how you're concerned, exclusively, with the numbers you think you can count. That attitude of dehumanization has never resulted in good things for society and humanity. That's the tradeoff I'm suggesting is important to consider when trying to make up numbers as you are. I'm not arguing that an absurdly low max speed is better. I'm arguing that it's small-minded to try to count like that.
> You can value a whole life loss higher than man hours. You can value a child more than the elderly. You can make all sorts of adjustments like that but they do not change the fundamental math of the problem.
I wouldn't make any adjustments like that. The value or importance of a human life (the case example being a person who cares for others and is cared about by others) can't be reduced to something translatable into man-hours. I'd trade hours with some people for minutes with others. Just because time is something you can quantify, and you like that you can count it, doesn't make it better or more important.
To be clear, I'm not saying your math is wrong; I'm saying you're wrong to believe it applies (in such a simplistic manner). You can use the math to decide how you're going to make tradeoffs given known input values: how much can my city pay for safety equipment to protect people? But you can't make up some adjacent math and say this car's design is wrong because it didn't kill the correct number of people... err, I mean, the correct number of man-hours.
Triggering some sort of extreme safety mode is considered failing now?
Anything other than "normalish" tends to be a failure in driving. I.e., stopping and throwing your hazards on when you're in the intersection isn't success just because there were even worse options to pick. It's nice they were able to pull the fleet back and get the cars off the roads during the problem, though.
I think this was a failure. The gold standard should be: if every human driver were replaced with an AI, how well could the system function? This makes it look like things would be catastrophic, thus showing how humans continue to be much more versatile and capable than AI.
I suppose if you lower the standards for what you hope AI can accomplish it wouldn't be considered a failure.
If every human driver were replaced with AI, this situation would have been fine. All the self-driving cars would have respected the four-way stop.
But they're exclusively used in areas that allow both human and AI drivers, so this hardly seems relevant.
>to learn from and improve.
Okay, let's see if they actually do it this time.
Waymo has been quite good about responsibly learning and improving imo. I do hope and think they’ll learn from this.
Have they implemented a cat-friendly update since the incident a few months ago?
I had to look up what this was a reference to. Several months ago a cat ran underneath a Waymo, and the vehicle's rear tire ran over it as the car pulled away from the curb. The NYT has a video [1] of the incident.
[1] https://www.nytimes.com/2025/12/05/us/waymo-kit-kat-san-fran...
I mean: I haven't implemented a cat-friendly update to my own driving, and it isn't clear to me how I would ever begin to attempt to do so.
I’d bet you already have a mode that would’ve prevented what happened to the cat. From NYT reporting on the actual incident:
A human driver, she believes, would have stopped and asked if everything was OK after seeing a concerned person kneeling in front of their car and peering underneath.
“I didn’t know if I should reach out and hit one of the cameras or scream,” she said of the perilous moment. “I sort of froze, honestly. It was disorienting that Waymo was pulling away with me so close to it.”
I watched the video and read the article. (I wish I didn't; I love cats. I've known some wonderful bodega cats myself.)
But I'll bet I already have a mode that makes me want to drive away from people I don't know who are acting weird around my car.
I mean: I've got options. I can fight, flee, or hang out and investigate.
But I'm human -- I'm going to make what ultimately turn out to be poor decisions sometimes. I will have this condition until the day I die, and there isn't a single thing I can do about it (except to choose to die sooner, I guess).
So to posit an example: I'm already behind the wheel of my fleeing-machine with an already-decided intent to leave. And a stranger nearby is being weird.
I've now got a decision to make. It may be a very important decision, or it may instead be a nearly-meaningless decision.
Again, I've got options. I may very well decide that fighting isn't a good plan, and that joining them in exploring whatever mystery or ailment they may perceive is also not a great idea, and thereby decide that fleeing is the best option.
This may be a poor choice. It may also be the very best choice.
I don't know everything, and I can't see everything, and I do not get to use a time machine to gain hindsight for how this decision will play out.
(But I might speculate that if I stopped to investigate every time I saw a nearby stranger act weird at night in neighborhoods with prominent security gates that I might have fewer days remaining than if I just left them to their own devices.)
That's an interesting perspective. The way I've always approached it is that if someone is looking at my car weird, I should probably ask what's up. I've honked at several cars to let them know their tire is flat, flagged down drivers in parking lots because some dumbass let a ton of nails fall off their work truck, etc. When it comes to cars, someone checking out my car in a "weird" way is a prompt for me to investigate, not flee.