Everyone talks about AI risk in the abstract. Here's what it looks like in practice: Angela Lipps, a grandmother from Tennessee, spent nearly six months behind bars because facial recognition software got it wrong.
Fargo police deployed facial recognition technology to identify suspects in a bank fraud investigation. The AI matched Lipps' face to the case. She was arrested, extradited from Tennessee to North Dakota, and incarcerated despite having no involvement in the crime. By the time records proved she had been in Tennessee during the fraud and the charges were dismissed, she had lost her home, her car, and her dog.
Let that sink in. An algorithmic error destroyed someone's entire life. Not theoretically - actually. Her house. Her car. Her pet. Gone. Because a computer was confident it had matched the right face.
Facial recognition defenders will point out that it's supposed to be used as an investigative lead, not conclusive evidence. But that's not how it works in practice. When an AI system flags someone, that flag carries authority. Law enforcement treats it as credible enough to arrest, prosecutors treat it as credible enough to charge, and the burden shifts to the accused to prove the machine wrong.
The consequences of a false positive fall hardest on people who are poor, elderly, or without the resources to fight back. And false positives aren't rare flukes: search one face against a database of millions, and even a tiny per-comparison error rate surfaces hundreds of innocent candidates. Angela Lipps is exactly the kind of person least equipped to navigate a wrongful prosecution - which is probably why it took six months to clear her name rather than six days.
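The base-rate problem is simple arithmetic. Here's a back-of-the-envelope sketch - the gallery size and error rate are illustrative assumptions, not figures from any vendor or from the Lipps case:

```python
# Base-rate math for a one-to-many face search: one probe image
# compared against every face in a gallery.
# All numbers below are illustrative assumptions, not vendor specs.

def expected_false_matches(gallery_size: int, false_match_rate: float) -> float:
    """Expected number of innocent people flagged when a single probe
    is compared against every entry in the gallery."""
    return gallery_size * false_match_rate

# A hypothetical statewide mugshot/DMV gallery, searched with a
# seemingly impressive per-comparison false match rate of 0.01%.
gallery = 5_000_000
fmr = 1e-4  # 1 in 10,000 comparisons wrongly "matches"

print(expected_false_matches(gallery, fmr))  # -> 500.0 innocent candidates
```

Even a system that is right 9,999 times out of 10,000 per comparison hands investigators hundreds of wrong faces per search. Which candidate gets arrested then depends entirely on human follow-up - the step that failed here.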
Here's what concerns me most: this isn't a bug, it's a feature. Facial recognition systems have documented accuracy problems, especially with women, elderly people, and people of color. Law enforcement knows this. They deploy them anyway. When the systems fail, the human cost is someone else's problem.
How many other Angela Lipps are out there? How many people are sitting in jail right now because an algorithm was confident enough to arrest them but not accurate enough to be right? We don't know, because there's no systematic tracking of AI-assisted arrests that turn out to be wrong.
