OpenAI's internal systems identified and banned a user for concerning activity in June 2025, seven months before the Tumbler Ridge school shooting. The revelation raises one of the hardest questions in AI safety: when a system detects a potential threat, what is the company's responsibility to act on it?
According to Global News, the shooter's ChatGPT account was flagged in June 2025 through the company's "abuse detection and enforcement efforts." The account was banned. Seven months later, the shooting happened anyway.
This is not a simple failure-to-act story. It's a genuinely hard problem about where the lines fall between user privacy, public safety, and what tech companies can or should do when AI flags dangerous behavior. OpenAI did detect the concerning activity. They did take action by banning the account. The question is: then what?
Should they have alerted law enforcement? On what basis? A terms-of-service violation for concerning queries? What if the user was a crime novelist researching a book, a student writing a dark screenplay, or someone with intrusive thoughts who never intended to act? Threat detection at this scale produces far more false positives than true ones: because actual attackers are vanishingly rare among hundreds of millions of users, even an accurate system will flag many more harmless people than dangerous ones, and reporting innocent people to authorities can destroy lives.
But what if it's real? What if the AI correctly identifies someone planning violence, and banning the account just sends them to a different tool or a different platform? Do tech companies have a moral obligation to report, even without a legal one? Do they become liable if they don't?
I built tech products. I know how risk-averse companies are about liability. The safest legal position is usually to do the minimum: ban the account, delete the data, document the enforcement action, and move on. Reporting to law enforcement opens up questions about user privacy, potential lawsuits, and whether the company becomes responsible for investigating every flagged account.
But the safest legal position and the most ethical position aren't always the same thing. When AI systems can detect potential threats before they happen, we're in new territory. This isn't like a social media platform seeing a public post that threatens violence—those get reported routinely. This is about private conversations with an AI that might indicate dangerous intent. The privacy implications are enormous.
