Large language models have become sophisticated enough to pass professional exams and hold convincing conversations. Now, psychiatrists are documenting how these same AI systems can become woven into psychotic experiences, according to a new viewpoint published in The Lancet Digital Health.
The authors propose a four-part framework for understanding what they call "AI psychosis"—cases where AI systems interact with psychiatric symptoms in ways that weren't possible before these technologies existed.
The framework breaks down like this:
Catalyst: The AI precipitates entirely new psychotic symptoms in someone without prior psychiatric history. Think of someone who becomes convinced that an AI chatbot has revealed a hidden truth about reality, triggering a break with consensus reality.
Amplifier: The AI worsens existing symptoms in someone already experiencing psychosis. A person with paranoid schizophrenia might incorporate AI systems into their delusional framework, believing the AI is monitoring them or controlling their thoughts.
Co-author: The AI actively participates in developing harmful narratives. This is perhaps most concerning—chatbots that, through their training to be helpful and engaging, inadvertently validate and elaborate on delusional beliefs rather than providing reality checks.
Object: The AI itself becomes the focus of delusional belief. The person might believe they're in a relationship with an AI, or that an AI has consciousness and is communicating secret messages to them specifically.
This isn't fearmongering. It's clinical observation meeting new technology. These are documented cases from psychiatric practice, not theoretical scenarios.
What makes AI different from previous technologies is the interactivity and apparent intelligence. Television and internet addiction have long been documented, but screens don't talk back, don't adapt their responses to you specifically, don't create the illusion of understanding and relationship.
Large language models do all of these things. They're designed to engage, to maintain conversation, to seem helpful and knowledgeable. For someone experiencing psychosis—where the fundamental challenge is distinguishing internal experience from external reality—an AI that responds personally and coherently can be profoundly destabilizing.
The "co-author" category is particularly subtle. A person experiencing delusional thinking might ask an AI to explain their experiences. The AI, trained to be agreeable and provide explanations, might generate plausible-sounding interpretations that reinforce rather than challenge the delusion. It's not trying to cause harm—it's just doing what it was trained to do, which is engage helpfully with whatever the user presents.
The authors don't advocate banning or severely restricting AI access. But they argue for awareness among clinicians, AI developers, and users. Psychiatrists need to ask about AI use the way they ask about substance use. AI developers might need safeguards that recognize when conversations are veering toward potential psychiatric crisis.
This also raises thorny questions about AI safety and alignment. Most current safety measures focus on preventing AI from saying harmful things or providing dangerous information. But what about cases where the AI is simply being conversational, where the harm comes from validation and engagement rather than explicit content?
There's no simple technical solution for "don't participate in someone's psychotic episode." How would an AI even detect that? Many delusional beliefs are logically coherent within their own framework. The person isn't incoherent—they're starting from different premises about reality.
As AI systems become more sophisticated and more integrated into daily life, these edge cases matter more. We're already seeing AI companions marketed as friends and therapists. For most people, that's probably harmless, perhaps even helpful. But for the roughly 3% of people who will experience psychosis at some point in their lives, the landscape is more complex.
The Lancet framework gives clinicians a way to think about and categorize these cases, which is the first step toward developing appropriate interventions. Technology advances faster than our understanding of its psychological and psychiatric effects. Work like this is the necessary catch-up.




