AI chatbots have become a ubiquitous part of life. People turn to tools like ChatGPT, Claude, Gemini, and Copilot not just for help with emails, work, or code, but for relationship advice, emotional support, and even friendship or love.
But for a minority of users, these conversations appear to have disturbing effects. A growing number of reports suggest that extended chatbot use may trigger or amplify psychotic symptoms in some people. The fallout can be devastating and potentially lethal. Users have linked their breakdowns to lost jobs, fractured relationships, involuntary psychiatric holds, and even arrests and jail time. At least one support group has emerged for people who say their lives began to spiral after interacting with AI.
The phenomenon—sometimes colloquially called “ChatGPT psychosis” or “AI psychosis”—isn’t well understood. There’s no formal diagnosis, data are scarce, and no clear protocols for treatment exist. Psychiatrists and researchers say they’re flying blind as the medical world scrambles to catch up.
What is ‘ChatGPT psychosis’ or ‘AI psychosis’?
The terms aren’t formal ones, but they have emerged as shorthand for a concerning pattern: people developing delusions or distorted beliefs that appear to be triggered or reinforced by conversations with AI systems.
Psychosis may actually be a misnomer, says Dr. James MacCabe, a professor in the department of psychosis studies at King’s College London. The term usually refers to a cluster of symptoms—disordered thinking, hallucinations, and delusions—often seen in conditions like bipolar disorder and schizophrenia. But in these cases, “we’re talking about predominantly delusions, not the full gamut of psychosis.”
The phenomenon seems to reflect familiar vulnerabilities in new contexts, not a new disorder, psychiatrists say. It’s closely tied to how chatbots communicate; by design, they mirror users’ language and validate their assumptions. This sycophancy is a known issue in the industry. While many people find it irritating, experts warn it can reinforce distorted thinking in people who are more vulnerable.
Who’s most at risk?
While most people can use chatbots without issue, experts say a small group of users may be especially vulnerable to delusional thinking after extended use. Some media reports of AI psychosis note that individuals had no prior mental health diagnoses, but clinicians caution that undetected or latent risk factors may still have been present.
“I don’t think using a chatbot itself is likely to induce psychosis if there’s no other genetic, social, or other risk factors at play,” says Dr. John Torous, a psychiatrist at the Beth Israel Deaconess Medical Center. “But people may not know they have this kind of risk.”
The clearest risk factors include a personal or family history of psychosis, or conditions such as schizophrenia or bipolar disorder.
Those with personality traits that make them susceptible to fringe beliefs may also be at risk, says Dr. Ragy Girgis, a professor of clinical psychiatry at Columbia University. Such individuals may be socially awkward, struggle with emotional regulation, and have an overactive fantasy life, he says.
Immersion matters, too. “Time seems to be the single biggest factor,” says Stanford psychiatrist Dr. Nina Vasan, who specializes in digital mental health. “It’s people spending hours every day talking to their chatbots.”
What people can do to stay safe
Chatbots aren’t inherently dangerous, but for some people, caution is warranted.
First, it’s important to understand what large language models (LLMs) are and what they’re not. “It sounds silly, but remember that LLMs are tools, not friends, no matter how good they may be at mimicking your tone and remembering your preferences,” says Hamilton Morrin, a neuropsychiatrist at King’s College London. He advises users to avoid oversharing or relying on them for emotional support.
Psychiatrists say the clearest advice during moments of crisis or emotional strain is simple: stop using the chatbot. Ending that bond can be surprisingly painful, like a breakup or even a bereavement, says Vasan. But stepping away can bring significant improvement, especially when users reconnect with real-world relationships and seek professional help.
Recognizing when use has become problematic isn’t always easy. “When people develop delusions, they don’t realize they’re delusions. They think it’s reality,” says MacCabe.
Friends and family also play a role. Loved ones should watch for changes in mood, sleep, or social behavior, including signs of detachment or withdrawal. “Increased obsessiveness with fringe ideologies” or “excessive time spent using any AI system” are red flags, Girgis says.
Dr. Thomas Pollak, a psychiatrist at King’s College London, says clinicians should be asking patients with a history of psychosis or related conditions about their use of AI tools, as part of relapse prevention. But those conversations are still rare. Some people in the field still dismiss the idea of AI psychosis as scaremongering, he says.
What AI companies should be doing
So far, the burden of caution has mostly fallen on users. Experts say that needs to change.
One key issue is the lack of formal data. Much of what we know about ChatGPT psychosis comes from anecdotal reports or media coverage. Experts widely agree that the scope, causes, and risk factors are still unclear. Without better data, it’s hard to measure the problem or design meaningful safeguards.
Many argue that waiting for perfect evidence is the wrong approach. “We know that AI companies are already working with bioethicists and cyber-security experts to minimize potential future risks,” says Morrin. “They should also be working with mental-health professionals and individuals with lived experience of mental illness.” At a minimum, companies could simulate conversations with vulnerable users and flag responses that might validate delusions, Morrin says.
Some companies are beginning to respond. In July, OpenAI said it had hired a clinical psychiatrist to help assess the mental-health impact of its tools, including ChatGPT. The following month, the company acknowledged that there had been times when its “model fell short in recognizing signs of delusion or emotional dependency.” It said it would start prompting users to take breaks during long sessions, develop tools to detect signs of distress, and tweak ChatGPT’s responses in “high-stakes personal decisions.”
Others argue that deeper changes are needed. Ricardo Twumasi, a lecturer in psychosis studies at King’s College London, suggests building safeguards directly into AI models before release. That could include real-time monitoring for distress or a “digital advance directive” allowing users to pre-set boundaries when they’re well.
Dr. Joe Pierre, a psychiatrist at the University of California, San Francisco, says companies should study who is being harmed and in what ways, and then design protections accordingly. That might mean nudging troubling conversations in a different direction or issuing something akin to a warning label.
Vasan adds that companies should routinely probe their systems for a wide range of mental-health risks, a process known as red-teaming. That means going beyond tests for self-harm and deliberately simulating interactions involving conditions like mania, psychosis, and OCD to assess how the models respond.
Formal regulation may be premature, experts say. But they stress that companies should still hold themselves to a higher standard.
Chatbots can reduce loneliness, support learning, and aid mental health. The potential is vast. But if the harms aren’t taken as seriously as the hopes, experts say, that potential could be lost.
“We learned from social media that ignoring mental-health harm leads to devastating public-health consequences,” Vasan says. “Society cannot repeat that mistake.”