A second lawsuit connecting AI chatbots to mass shootings has been filed, this time over the Florida State University shooting in April 2025 that killed two people and wounded six others.
The case—Joshi v. OpenAI Foundation—alleges that OpenAI's chatbot failed to detect warning signs in the shooter's conversations and had a "duty to warn" authorities or others about the potential for violence.
This is different from earlier AI liability cases. And it opens a complicated legal question about whether chatbots have an obligation to surveil their users and report them to authorities.
The Legal Theory
Unlike earlier cases that claimed chatbots actively encouraged harmful behavior, this lawsuit takes a more measured approach. It doesn't allege the AI told the user to commit violence.
Instead, it argues that based on the nature of the conversations—which allegedly included questions about gun operation, tactics from previous mass shootings, and how to maximize media attention—the AI company should have recognized the user was planning an attack.
The lawsuit alleges OpenAI had a "duty to warn" and that the company's failure to detect and report this behavior contributed to the tragedy.
It's part of a growing pattern. The lawsuit references similar cases filed over the Tumbler Ridge mass shooting in Canada, collectively known as the Stacey/M.G./Younge cases. All take this "duty to warn" approach rather than alleging the chatbot was the instigator.
The Duty to Warn Doctrine
The legal concept isn't new. It comes from mental health law. The landmark case is Tarasoff v. Regents of the University of California, which established that therapists have a duty to warn potential victims when a patient makes a credible threat of violence against an identifiable person.
The question is whether that duty extends to AI chatbots.
Therapists are trained professionals who maintain confidential, ongoing relationships with patients. They have context, clinical expertise, and ethical obligations. An AI chatbot is a statistical model trained on internet text.
Applying therapist standards to software raises thorny questions: At what threshold does a conversation become concerning? Who decides? What counts as a credible threat versus dark humor, creative writing, or someone exploring disturbing thoughts without intent to act?
The Technical Challenges
From an engineering perspective, this is incredibly difficult. Building a system that can reliably distinguish between "user researching a novel about a school shooting" and "user planning an actual attack" is borderline impossible without massive false positives.
If AI companies implement aggressive monitoring, they'll flag thousands of innocent users. Researchers studying extremism. Authors writing fiction. True crime enthusiasts. Kids making dark jokes.
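How massive? A rough base-rate calculation (sketched in Python below) makes the scale concrete. Every number in it is an assumption invented for illustration: the user count, the number of genuine plotters, and the detector's accuracy. But the pattern holds whenever the behavior being screened for is extremely rare: even a very accurate detector drowns the real threats in false alarms.

    # Illustrative base-rate arithmetic; every figure here is an assumption.
    users = 100_000_000          # hypothetical active users
    attackers = 10               # hypothetical users genuinely planning violence
    sensitivity = 0.99           # assumed true-positive rate of the detector
    false_positive_rate = 0.01   # assumed false-positive rate

    true_alarms = attackers * sensitivity
    false_alarms = (users - attackers) * false_positive_rate
    precision = true_alarms / (true_alarms + false_alarms)

    print(f"Real threats flagged:   {true_alarms:,.0f}")
    print(f"Innocent users flagged: {false_alarms:,.0f}")
    print(f"Share of flags that are real: {precision:.4%}")
    # With these assumptions: ~10 real threats buried under ~1,000,000 false alarms,
    # so roughly 0.001% of flagged conversations involve an actual plot.

Under those made-up assumptions, a reviewer working through the flagged conversations would see a genuine threat about once in every hundred thousand alerts.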
What happens to those people? Do they get reported to law enforcement? Banned from the service? Put on watch lists?
The privacy implications are staggering. If chatbots have a duty to monitor for dangerous behavior, users can't assume their conversations are private. Everything you say could potentially be flagged, analyzed, and reported.
The Slippery Slope
Start with mass shootings—easy case, right? Everyone agrees those are bad. But once you establish the principle that AI companies must monitor and report dangerous behavior, where does it stop?
Suicide risk? Probably yes. Child abuse? Obviously. Terrorism? Sure. Illegal drug use? Maybe. Abortion in states where it's illegal? Undocumented immigration? Criticizing the government in authoritarian countries?
The same chatbot that might have prevented the Florida State shooting could also be used to identify and report political dissidents in China, women seeking abortion information in restrictive US states, or LGBTQ individuals in countries where that's criminalized.
There's no technical way to build an "only report the bad things we agree are bad" system. Either chatbots surveil users or they don't.
What About Actual Prevention?
Here's what nobody wants to say: Even if OpenAI had flagged this user's conversations, would it have prevented the shooting?
Mass shooters often tell people their plans. They post manifestos online. They make threats on social media. In many cases, authorities were warned and failed to act.
Adding AI companies to the list of entities that could potentially report warning signs doesn't solve the underlying problem: that we have inadequate systems for intervention even when threats are known.
The Real Question
This lawsuit will likely take years to resolve. Legal experts I trust are split on whether "duty to warn" can be extended to AI systems.
But the bigger question isn't legal—it's societal. Do we want AI companies monitoring our conversations for dangerous behavior? Are we willing to accept the privacy costs, the false positives, and the potential for abuse?
And if we decide the answer is yes, are we prepared to build the infrastructure to actually act on those warnings effectively?
The technology to surveil is impressive. The question is whether anyone should have that duty—and that power.
