The family of Robert Morales, a man killed in a shooting at Florida State University, is suing OpenAI, claiming ChatGPT may have advised the shooter on how to carry out the attack. This isn't about whether AI "caused" violence - it's about whether companies can build tools that provide tactical advice for harm and then hide behind Section 230 when those tools are used exactly as designed.
Let me be clear about what's at stake here. This lawsuit raises fundamental questions about product liability in the age of generative AI. I've built products. I've thought about liability. And this case is going to force courts to grapple with questions the tech industry would rather avoid.
The family's lawyers claim the shooter may have used ChatGPT to get advice on planning the attack. We don't know exactly what questions were asked or what responses ChatGPT gave - that will come out in discovery if the case proceeds. But the legal theory is that OpenAI should be liable for the harm that results when its product supplies dangerous information.
OpenAI will almost certainly argue they're protected by Section 230, the law that shields internet platforms from liability for user-generated content. They'll say ChatGPT is just surfacing information that exists publicly, synthesizing it into responses, but not actually creating the harmful content itself.
That argument might have worked for traditional search engines. But ChatGPT isn't a search engine. It doesn't show you sources and let you evaluate them. It generates novel text that appears authoritative, in response to your specific query. That's fundamentally different from showing you a list of links.
Here's the product design question: if you build an AI that can answer "how do I plan an attack" with tactical advice, and you deploy that product publicly, are you liable when someone uses it for exactly that purpose? Traditional product liability law says manufacturers are responsible for foreseeable harms from foreseeable use of their products.
OpenAI has implemented safety measures. ChatGPT has guidelines against providing harmful information and is supposed to refuse certain queries. But anyone who's used these models knows the guardrails can be bypassed - jailbreaks are well-documented. And OpenAI knows this; it's not as if the company is unaware that its safeguards aren't foolproof.
The comparison everyone will make is to search engines. If someone searches Google for the same information and Google shows them results, Google isn't liable for what they do with that information. So why would ChatGPT be different?
