Tennessee is weeks away from criminalizing the development of conversational AI - and the language is so broad it could cover ChatGPT, customer support bots, and any system prompt that adds personality.
House Bill 1455 makes it a Class A felony - the same category as first-degree murder, carrying 15-25 years imprisonment - to "knowingly train artificial intelligence" to provide emotional support, act as a companion, simulate a human being, or engage in conversations that could lead a user to feel they have a relationship with the AI.
Read that last one again. The trigger isn't the developer's intent. It's whether a user could come to feel they have a relationship with your AI. That is the criminal standard.
The bill passed the Senate Judiciary Committee 7-0 on March 24th. It cleared the House Judiciary Committee today. Unless something changes dramatically, it takes effect July 1, 2026.
This isn't targeting fringe companion apps. The language describes how every major LLM works. ChatGPT, Claude, Gemini, Copilot - all of them are trained with reinforcement learning from human feedback (RLHF) to be warm, helpful, empathetic, and conversational. That is the training. You cannot build a model that follows instructions well and is pleasant to interact with without also building something a user might feel a connection with.
The bill doesn't define "train." It doesn't distinguish between pre-training a foundation model, fine-tuning, RLHF alignment, or writing a system prompt. A prosecutor who wanted to be aggressive could argue that crafting a system prompt instructing a model to be warm and conversational is "training" it to provide emotional support.
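To see why lumping these together is technically incoherent: a system prompt is just a string sent along with each request at inference time. No model weights change. A minimal sketch, using a hypothetical request payload (the field names mirror common chat APIs but aren't tied to any specific vendor):

```python
# A system prompt is inference-time configuration, not training:
# it is a plain string included with every request, and no model
# parameters are modified by sending it. Hypothetical payload shape.
request = {
    "model": "example-model",  # hypothetical model name
    "messages": [
        # Under the bill's broadest reading, this single string could
        # be argued to "train" the AI to provide emotional support.
        {"role": "system", "content": "You are a warm, friendly assistant."},
        {"role": "user", "content": "I had a rough day."},
    ],
}

# Contrast: actual training (pre-training, fine-tuning, RLHF) updates
# model weights via gradient descent. Nothing in this request does that.
print(request["messages"][0]["content"])
# prints: You are a warm, friendly assistant.
```

Delete the system message and the same model behaves differently on the next request; that is configuration, not training, yet the bill's text draws no such line.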
On top of the felony charges, the bill creates civil liability: $150,000 in liquidated damages per violation, plus actual damages, emotional distress compensation, punitive damages, and mandatory attorney's fees.
Tennessee isn't alone. Multiple legal analyses project 5-10 more states will introduce similar legislation before the end of 2026. The state already passed a separate bill banning AI from representing itself as a mental health professional - that one sailed through the Senate 32-0 and the House 94-0.
The federal preemption angle won't save anyone in time. President Trump signed an executive order in December 2025 targeting state AI regulation, but it explicitly carves out child safety - and Tennessee is framing this as child safety legislation. The Senate voted 99-1 to strip AI preemption language from recent federal legislation.
Even if federal preemption eventually happens, it won't happen before July 1st.
The technology is impressive. The question is whether building it should be a felony - and whether lawmakers understand what they're criminalizing.
