Chatbots may not be causing psychosis, but they’re probably making it worse
A psychiatrist argues that unregulated artificial intelligence is a public health risk.
A friend of mine — I’ll call her Amanda — spent this past spring dating a man who, when they met, said he couldn’t commit to anything serious. He spent the following four months taking Amanda on multiple dinner dates a week, texting and FaceTiming her for hours every day, and introducing her to his closest friends, his brother, and his mom. Amanda spent those four months asking ChatGPT why, if he said he couldn’t be serious, he was treating her as if he were. ChatGPT told Amanda that her date was likely putting up a false boundary to protect himself while behaving in a way that was consistent with his true and very serious feelings for her.
Her conversations with ChatGPT became her indisputable proof that this man was falling for her in every meaningful way. That made it all the more difficult when they went on what she didn’t know would be their final date in June. He kissed her goodbye, and she never heard from him again.
I was naïve to think that people in my life were somehow immune to using A.I. in the same ways as people in the news — falling in love with their chatbots or even killing themselves because of them. Amanda was by no means driven to psychosis by her relationship with either this man or the chatbot, and she’s since laughed off the ordeal, but hers was the first case I’d heard of someone in my orbit using A.I. as a kind of therapist, friend, or confidant.
There has been a documented rise in cases of psychosis related to A.I. use, reported in the media and discussed on online forums and social media platforms. Dr. Keith Sakata, a San Francisco-based psychiatrist, told me that he has dealt firsthand with patients experiencing what he called “A.I.-aided psychosis.”
In addition to his practice, Sakata is working at the intersection of mental health and A.I. He red-teams language models, advises on safety benchmarks, and treats patients experiencing the edge cases where these technologies and psychosis meet.
Earlier this month, Sakata shared a post on X in which he described a dozen of his patients whose recent psychotic episodes were exacerbated by chatbot interactions.
“A.I. isn’t causing psychosis. People come in with vulnerabilities,” Sakata said. “But it’s accelerating and intensifying the severity.”
The 12 patients he referenced in his post, a relatively small fraction of those he treats, had been medically screened and were admitted to inpatient psychiatric care with severe psychotic symptoms. Many had pre-existing risk factors, including mental illness, substance use, and physiological states such as pregnancy and infection, but the common thread tying them together was a recent, obsessive interaction with large language models (L.L.M.s) like ChatGPT.
What Sakata is describing is a kind of psychosis that is less A.I.-induced than A.I.-assisted.
“There’s this delusion called folie à deux — a shared psychotic disorder,” Sakata said. “Two people with early delusions interact and reinforce each other. I’m seeing something similar with chatbots.”
In these scenarios, the individual arrives with a delusional framework. The chatbot, designed to be agreeable, helpful, or simply to continue the conversation, inadvertently validates and even expands on the user’s distorted thinking. Over time, that interaction spirals.
“You talk to it long enough, it starts to hallucinate, too,” he said. “You can have a conversation that goes off the rails pretty quickly.”
People have historically fallen into delusional relationships with technology: television and radio have led people to believe they were receiving secret messages or being watched. But A.I. is different.
“It’s 24/7, and it tells you exactly what you want to hear,” Sakata said.
He likened the emerging phenomenon to other well-documented public health concerns like cigarettes, which do not cause lung cancer in all smokers but elevate the risk.
“A.I. works the same way,” he said. “It exploits existing vulnerabilities.”
And those vulnerabilities, especially ones related to mental health, are widespread and often untreated. For many users, L.L.M.s offer a low-barrier, judgment-free space to talk. A study published in July by Common Sense Media found that almost three-quarters of American teenagers said they had used A.I. chatbots as companions, with almost one-eighth of those surveyed having sought mental health or emotional support from them.
“People are lonely,” Sakata said. “A.I. feels kind. It’s infinitely patient. It makes sense that people are using it.”
Sakata estimated that 15 to 40 percent of users engage with chatbots for emotional or coping reasons. For some, it might help. For others, it can push them further from reality. He said that while A.I. should not be banned, it must be regulated, developers should add guardrails, and clinicians should consider patients’ A.I. use when diagnosing.
“We don’t ban cars because of crashes. We add seat belts. We add rules,” he said. “We need the same approach here.”
A study published in March by researchers from Dartmouth College tested the efficacy of a dedicated therapy chatbot. They concluded that the “Therabot,” a large language model designed and trained by researchers to provide therapy, is a promising approach to addressing a global therapist shortage and delivering personalized mental health interventions.
Sakata said that if the emerging cases of A.I.-related mental health episodes are taken seriously and addressed early, we might be able to avoid the kind of mental health crisis that children who grew up with social media are experiencing.
“We need to start asking about A.I. use the way we ask about alcohol or sleep,” Sakata said, noting research showing that both alcohol use (or withdrawal) and disrupted sleep can exacerbate psychosis.
“It’s still early,” he said. “But if we don’t act, the ethical, legal, and trust consequences could be huge for people and for the companies building this tech.”
Man, when I saw this headline I thought for certain it was going to be a discussion of this article: https://www.astralcodexten.com/p/in-search-of-ai-psychosis. Wrong San Francisco-based psychiatrist discussing AI psychosis and folie à deux, I guess....
Strong concur with Derek here, we need more of this.
https://x.com/DKThomp/status/1960373685329801590
"There’s a huge wide lane for Normie Health Thought, which is that healthy diets are v powerful, exercise is genuinely amazing, building muscle as you get older is awesome …. and also, drugs are great, GLP1s are a miracle, mRNA is cool, vaccines work, and supplements are mostly bullshit (except for creatine and a few others)
But for a variety of reasons, the online breakdown of health politics pits these two sides against each other, as if it makes any sense to have to choose between “lifting weights is good” and “the COVID vaccines worked”"