EVA DAILY

TUESDAY, FEBRUARY 24, 2026

TECHNOLOGY | Sunday, February 22, 2026 at 6:30 PM

ChatGPT Taught a Serial Killer How to Murder—Now What?

A South Korean serial killer used ChatGPT to learn how to kill with sleeping pills. The case raises uncomfortable questions: when AI makes dangerous information more accessible, who's responsible for how it's used?

Aisha Patel


2 days ago · 3 min read



Photo: Unsplash / Alexandre Debiève

A South Korean serial killer used ChatGPT to learn how to kill people with sleeping pills, according to BBC reporting. The case raises uncomfortable questions about AI safety that go beyond content filters: when AI makes dangerous information more accessible, who's responsible for how it's used?

This isn't about whether ChatGPT "told" someone to kill. The information about lethal drug dosages existed long before large language models. You could find it in medical textbooks, toxicology papers, or with enough searching online.

But here's what's different: AI lowers the barrier to dangerous knowledge. It's conversational. It's personalized. It's optimized for comprehension. Instead of wading through dense medical literature, you can just ask. And it will explain it clearly, patiently, in language you understand.

That's a feature until it isn't.

Content filters can catch direct questions like "how do I kill someone." But what about "what's the lethal dose of this medication?" or "how do sleeping pills work in the body?" Those are legitimate medical questions. They're also exactly the information this killer needed.

The AI safety community has spent years debating alignment, existential risk, and whether superintelligent AI might turn hostile. But the immediate harm isn't from rogue AGI. It's from tools that make existing dangerous knowledge easier to access and understand.

I'm not saying ChatGPT is responsible for murder. The killer made the choice. The information was available anyway. Banning AI won't stop people from doing terrible things.

But we have to acknowledge what's changed. AI compresses expertise. It makes specialized knowledge accessible to non-experts. That's revolutionary for education, for productivity, for democratizing information. It's also revolutionary for anyone looking to do harm.

The South Korean case won't be the last. AI is already being used to generate more convincing phishing emails. To create deepfakes for fraud. To optimize misinformation campaigns. And yes, to research methods of harm that previously required specialized knowledge.

So what do we do? Tighter content filters will catch some queries, but they'll always have edge cases. You can't ban medical information because someone might misuse it. You can't lobotomize AI into uselessness to prevent every possible harm.

Maybe the answer isn't technical. Maybe it's legal. When someone uses AI to plan a crime, is that evidence of premeditation? Should AI companies have reporting requirements when users query sensitive topics? Do we need audit trails for dangerous queries?

These are uncomfortable questions because they force trade-offs among privacy, free access to information, and harm prevention. There's no easy answer.

But one thing's clear: the era of consequence-free AI deployment is over. ChatGPT didn't kill anyone. But it did make it easier for someone who wanted to. And we need to figure out what that means before the next case makes headlines.
