OpenAI’s o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors
theguardian.com
402 points by donsupreme a day ago
I'd be very very hesitant to trust studies like this. It's very easy to mess up these benchmarks.
See for example this recent paper where an AI managed to beat radiologists at interpreting x-rays... when the AI didn't even have access to the x-rays: https://arxiv.org/pdf/2603.21687 (on a pre-existing "large scale visual question answering benchmark for generalist chest x-ray understanding" that wasn't intentionally messed up).
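That failure mode is also easy to check for yourself with an input-ablation pass: re-score the benchmark with the images withheld and see whether accuracy collapses. A minimal sketch, assuming a hypothetical model_answer callable and benchmark item format (not anything from the paper):

    from typing import Callable, Optional

    def accuracy(benchmark: list[dict],
                 model_answer: Callable[[str, Optional[bytes]], str],
                 use_images: bool) -> float:
        # Each benchmark item is assumed to look like
        # {"question": str, "image": bytes, "answer": str}.
        correct = 0
        for item in benchmark:
            # Ablation: withhold the x-ray when use_images is False.
            image = item["image"] if use_images else None
            prediction = model_answer(item["question"], image)
            correct += prediction.strip().lower() == item["answer"].strip().lower()
        return correct / len(benchmark)

    # If accuracy(benchmark, model_answer, use_images=False) stays close to
    # accuracy(benchmark, model_answer, use_images=True), the questions are
    # leaking the answers and the model never needed the images at all.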
And when interpreting x-rays, human radiologists really do just look at the x-rays. In the setting this article discusses, though, human doctors don't diagnose an ER patient from the notes alone. You're asking them to perform a task that isn't how they actually work, that they aren't experienced in or trained for, and then saying "the AI outperforms them". Even if the notes aren't accidentally giving away the answer through some weird side channel, that's not that surprising.
Which isn't to say that I think the study is either definitely wrong, or intentionally deceptive. Just that I wouldn't draw strong conclusions from a single study here.
I agree with you on this specific study. However, I can't really wrap my head around the idea that doctors will remain better than AI models in the long run. After all, medicine is largely about knowledge, experience and intelligence (maybe "pattern recognition"), and on all of those we should expect the best AI models (especially ones focused solely on the medical field) to beat the large majority of humans (i.e. doctors). If we already make this assumption for software engineers, we should make it for this field as well. And let's be realistic: each time I've seen a doctor over the last few months (including the ER twice), they were using ChatGPT (not kidding, it shocked me).
So I’m genuinely curious:
What is the specific capability (or combination of capabilities) where people believe a top medical AI will, permanently (or at least for decades), be unable to match or exceed a good human doctor? Let's put liability and ethics aside and be purely objective about it.
It's having a general understanding/view of the "baseline", i.e. healthy anatomy. That's something LLMs will never have, and it's why they'll never have true reasoning: they lack a "worldview" and never know whether they are hallucinating. To aid doctors we don't need LLMs but rather computer vision and pattern recognition, as you correctly point out.
But it's important not to rely on it blindly. Doctors can easily recognize and correct for measurements taken with incorrect input, e.g. ECG electrodes attached in reverse order.
To answer your question: talking to a human.
Medicine is so much more than "knowledge, experience, and pattern matching", as any patient ever can attest to. Why is it so hard for some people to understand that humans need other humans and human problems can't be solved with technology?
From what I hear from the women in my life, the human element of medicine is almost a strict negative for them. As a guy it hasn't been much better, but at least doctors listen to me when I say something.
One of, if not THE biggest challenge in getting treatment is getting past insurance rules designed to deny treatment. This is much, much easier when you're able to convince a doctor (and/or trained medical staff) to argue on your behalf. If you can't get those folks to listen to you, that's probably not gonna happen. You might have to go through several different practices before you find a sympathetic ear.
Now replace some / all of those humans with... A machine whose function also needs insurance approval.
It's gonna end badly.
Sounds like we need to dismantle and replace this broadly dysfunctional system at multiple points. It's not like the US insurance landscape is anywhere close to the best way of handling healthcare if you look at many places in the world.
I used to think this too. But the past couple of years have soured my taste for "dismantle and replace" of vital institutions.
I still think healthcare needs to be reformed, and I hope that insurance will someday be a thing of the past, but I've hung up my chainsaw for now.
This is because "dismantle and replace" (or perhaps in other words, "defunding") is not a serious, viable solution to many of the societal issues we face.
Things were ruined slowly. They unfortunately will need to be fixed very slowly too.
I don't think that's going to work. We need broad political change, and then that has to work rapidly to legislate this. I don't think slow and steady has done anything but lead to the decay of our institutions over the last 70 years.
> They unfortunately will need to be fixed very slowly too.
This can work until you hit a crisis point; I think one issue is we are sliding faster in the wrong direction (increasing bureaucracy, increasing fees, wait times, overwork etc), so "slowly" can work, but only if it's "fast enough", if you get what I mean (people are really suffering out there).
It's increased mine. If it works for the repugnant morons in government right now, we can use the same playbook for positive change.
It's easy to destroy but hard to create. If your goal is to further destroy then I suppose that's achievable, but I have a hard time picturing what positive change is going to come from it.