Innocent woman jailed after being misidentified using AI facial recognition
grandforksherald.com | 752 points by rectang | 3 days ago
> According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo. In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.
> Once they were in hand, Fargo police met with him and Lipps at the Cass County jail on Dec. 19. She had already been in jail for more than five months. It was the first time police interviewed her.
How is this the fault of AI? It flagged a possible match. A live human detective confirmed it. And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for five months before even interviewing her or doing any due diligence.
There's a reason why we don't let AI autonomously jail people. Instead of scapegoating an AI bogeyman, maybe we should look instead at the professional human-in-the-loop who shirked all responsibility, and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
> How is this the fault of AI? It flagged a possible match. A live human detective confirmed it.
Because we're seeing the first instances of what reality looks like with AI in the hands of the average bear. Just like the excuse was "but the computer said it was correct," now we're just shifting to "but the AI said it was correct."
Don't underestimate how much authority and thinking people will delegate to machines. Not to mention the lengths they'll go to weasel out of taking responsibility for a screw up like this (saw another comment in this thread about the Chief of Police stepping down but it being framed as "retirement").
It's only recently that some have come to terms with the fact that DNA evidence sometimes returns false positives. Society, and law enforcement, assumed that DNA was infallible. No one apparently asked how millions of people could be reduced to a tiny number of genetic markers without any overlap.
Danish police had to redo 20,000 DNA tests with a larger set of markers being tested, because they had jailed someone based solely on a DNA test and didn't consider that they might have gotten the wrong person despite the DNA match. It's essentially a human hash collision.
Identification by AI is going to be the same, except worse, because it's frankly less scientific. Law enforcement, the judicial system, and especially the public are simply too uninterested in learning the limitations of these types of systems. Even in the more civilized parts of the world, police would love to just have the computer tell them who to pick up and where.
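The "human hash collision" point above can be sketched numerically. The numbers below are illustrative assumptions, not real forensic random-match probabilities: with a fixed per-pair match probability, the expected number of coincidental matches grows with the number of pairs, which grows quadratically with the database size.

```python
# Back-of-the-envelope sketch of DNA profiles as "hashes" of people.
# Assumed numbers for illustration only: a random match probability p
# per pair of profiles, searched across a database of n profiles.

def expected_collisions(n_profiles: int, random_match_prob: float) -> float:
    """Expected coincidental profile matches among n_profiles,
    assuming independent profiles (a simplification)."""
    n_pairs = n_profiles * (n_profiles - 1) // 2
    return n_pairs * random_match_prob

# A 1-in-a-million profile sounds unique, but 20,000 profiles
# form roughly 200 million pairs:
print(expected_collisions(20_000, 1e-6))  # ~199.99 expected matches
```

This is the same birthday-paradox effect that makes short hashes collide: the profile only needs to be "unique enough" for one comparison, not for hundreds of millions of them.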
There was a man arrested in Santa Clara County because his DNA was tracked to a murder scene by the paramedics who had treated him before they were called to the scene of the murder. He only got away with it because the public defender realized that he was in the hospital's detox ward at the time of the murder.
Typically, the “it” in the phrase “got away with it” refers to an action that broke the rules.
“Got off” would be more appropriate
"got off" implies he was guilty but got away with it. I'd say "vindicated" or "absolved" fit the bill here.
DNA can also be completely faked https://www.technologyreview.com/2009/08/17/210960/how-dna-e...
not the first instance.
This was 2023 https://www.youtube.com/watch?v=lPUBXN2Fd_E&t=19s
A dude in the USA was arrested in a casino because the casino's facial recognition software said he had been trespassed before. He hadn't. I think there were height and eye colour differences. The police still arrested him and booked him. I think the prosecutors took it to trial.
I'm sorry, but this is a piss-poor excuse. When I use Claude Code and it ships broken features, I'm 100% responsible.
Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story.
If the point is "cops can't be trusted". Why do they have GUNS?! AI is the least of your problems.
I feel like I'm going crazy with this narrative.
> I feel like I'm going crazy with this narrative.
We're only getting warmed up. There are programmers on HN that will take the output of their favorite AI, paste it and run it. And we're supposed to be the ones that know better.
What do you think an ordinary person is going to do in the presence of something they cannot relate to anything else except an oracle, assuming they know the term? You put anything in there and out pops an extremely polished-looking document, something that looks better than whatever you would put together yourself, with a bunch of information on it and all kinds of juicy language geared up to make you believe the payload. And it does that in a split second. It's absolutely magical to those in the know, let alone to those who are not.
They're going to fall for it, without a second thought.
And they're going to draw consequences from it that you thought could use a little skepticism. Too late now.
When you foster a culture of impunity and passing the buck, don't be surprised when they pass the buck to the inscrutable black box they bought.
You might even argue that's the purpose of the inscrutable black box.
The “I” in “AI” stands for “intelligence”. Cops are using AI facial recognition because it is being sold to them as being smarter and better than what they are currently capable of. Why are we then surprised that they aren’t second-guessing the technology?
AI facial recognition is smarter than what they are capable of. That's not the issue. It is much faster than a human, and state-of-the-art models make fewer errors than a human (though the types of errors are not the same).
The issue is that facial recognition is just not very reliable. Not for humans and not for machines. If you look at millions of people, some of them just look incredibly similar. Yet police apparently thought that was all the evidence they would ever need. A case so watertight there's no point in even talking to the suspect.
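That scale argument can be made concrete with a toy base-rate calculation. The rates below are assumptions for illustration, not figures from any real face-matching system: even a per-comparison false positive rate that sounds tiny produces thousands of innocent hits when one probe photo is searched against a large gallery.

```python
# Illustrative sketch of the base-rate problem in 1:N face search.
# Assumed numbers (not vendor figures): a matcher with a 0.1% false
# positive rate per comparison, searched against a large photo gallery.

def expected_false_hits(gallery_size: int, false_positive_rate: float) -> float:
    """Expected number of wrong people flagged in a single search."""
    return gallery_size * false_positive_rate

# One probe photo against 10 million driver's license photos:
print(expected_false_hits(10_000_000, 0.001))  # 10,000 innocent "matches"
```

Even if the software ranks candidates well, a system that surfaces any lookalike from a gallery this size will routinely put innocent people in front of a detective, which is why the human review step carries all the weight.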
So the sane solution here is just leaving unreliable stuff to humans and reliable stuff to machines. Especially when human wellbeing and freedom are at stake.
To define the line between the two, calculate the percentage of cases when mainstream CPUs return anything but integer 4 after addition of integer 2 and integer 2, and use that as the threshold to define "reliable".
> The “I” in “AI” stands for “intelligence”
By that logic the “I” in Siri is 2x more intelligent.
Because they are supposed to possess minimum levels of intelligence found in homo sapiens, which includes not believing anything a salesperson says.
Also, their whole job is dealing with people who constantly lie to them.
There are two things occurring here.
Police get raises and recognition for closing cases. In general, they don't care if you're guilty or not; that's someone else's problem. Same with the detective, same with the DA. The more cases they close, the 'tougher they are on crime'.
The next thing occurring is this: if you have a broken system whose injustice is checked only by the limitations of the human elements, and you start replacing those human elements and power-scaling them, you have an unlimited downside.
Some police departments seem to actively reject candidates that have higher scores on IQ tests. Not that I think IQ test scores and actual intelligence are related but it clearly shows their intended target candidate group.
https://abcnews.com/US/court-oks-barring-high-iqs-cops/story...
This came up a few weeks ago. I don't think it's true. This lawsuit from 26 years ago is the only example anybody has come up with. Among the problems with this claim:
* Nobody can find a police department that administers any kind of general cognitive test.
* There are large states with statewide written police aptitude tests that are imperfect but correlated to general cognitive ability, and maximizing scores on that test is the universal correct strategy.
* It's a luridly stupid policy and most municipalities aren't luridly stupid.
I think this happened like, once or twice, in one or two of the 20,000 police departments across the United States, many of which are like one goober and his sidekick (no offense to them; just, you live in gooberville, you're a goober), and now it's an Internet meme that police departments specifically hire for midwittery. Nah.
In different states, police use cognitive aptitude tests such as the Wonderlic -- https://jobdescriptionandresumeexamples.com/10-important-fac... -- https://www.practice4me.com/lst-police-exam/ -- these are not strictly 'IQ' tests, but they're very similar.
The Wonderlic might as well be an IQ test (I'm using the term "general cognitive test").
The LST isn't; it's a domain-specific occupational exam.
If you find a place that (1) uses the Wonderlic and (2) has recently (like, not all the way back in 2000) claimed there was a high-end cut-off for applicants, you'll have disproven my claim. I don't think giving general cognitive tests to prospective police officers is common; this is why there are things like the LST, the PELLETB, and the POST.
You're over-selling the minimum level of intelligence in homo sapiens.
What you're stating is your wishful thinking. Don't get me wrong. I'd also like what you say to be true. It very much is not. Quite the opposite, which is why salespeople "work".
The amount of AI bullshit Senior+ level developers just paste to me as truth is astonishing.
As soon as we start to see a pattern of shitty vibe-coded software actually harming people via defects etc. (see: Therac-25), I would hope that the conversation is about structural change to mitigate risk in aggregate rather than just punitive consequences for the individual programmers who are "responsible". The latter would be a fantastically stupid response and would do little or nothing to reduce future harm.
All accountability need not be punitive; we can certainly talk about systemic guardrails. What I find hard to believe is someone saying that the Chief of Police declaring "We are not going to talk about that today" is not the biggest scandal, but the AI is.
"Among his accomplishments has been establishing the department’s Real Time Crime Center that leverages technology and data to support officers in responding more effectively to incidents," the city's release said. "Zibolski also prioritized officer wellness initiatives to strengthen mental health resources and resilience within the department. He reinstituted the Traffic Safety Team to focus on roadway safety and proactive enforcement, and ... played an active role in statewide discussions on various issues affecting law enforcement."
From the same article... He spearheaded a push to "leverage technology and data to support officers in responding more effectively to incidents", then that same technology mistakenly ruined a woman's life by passing along a hit to an officer who compared it with her FB photos and said "sure, seems right". The technology seems highly relevant here. Plus, as we've seen in the software world, when a mandate comes from the top to use the shiny new magic AI tools as much as possible, the officer may have felt pressured to make arrests using the new system they paid a bunch of money for instead of second-guessing whatever it spits out.
> someone saying that the Chief of Police declaring "We are not going to talk about that today" is not the biggest scandal, but the AI is.
Who is this "someone"? OP's article and the discussion here are absolutely not neglecting the human factors and general institutional failure that made this possible. But it's also true that without these "AI" tools, it would never have happened.
Yea but this feels like when a Waymo ran over a cat, and a Human driver ran over a toddler and both got the same level of coverage in the media (actually the cat got more follow-up coverage). And I'm supposed to believe both issues are equally important.
No. That's gaslighting, and totally misplaced political activation.
What do you propose we do in the latter situation? The news isn't the value of the life that was (presumably) lost. The news is the circumstances that made that loss possible. The human driver was maybe careless, or maybe didn't look. The child safety classes I took emphasized over and over again to look around your car and yard before backing your car out. This is a problem with a known solution that unfortunately still happens despite the best efforts to prevent it.
Waymo hitting a cat is obviously less tragic, but if it can hit a cat, what else can it hit? A toddler? A human? The wall of your kitchen? This is a problem that has no known solution; furthermore, it's a problem that the engineers at Waymo don't seem overly keen on solving quickly.
"This is a problem with a known solution that unfortunately still happens despite the best efforts to prevent it."
Great, let's just apply that logic to Waymo as well and call it a day (see how silly that sounds?). Waymo has engineers... so does the Department of Transportation.