Bot Well

Doctors Say AI Use Is Almost Certainly Linked to Developing Psychosis

A consensus is emerging.
More and more doctors agree that using AI chatbots is linked to delusional episodes and cases of psychosis.
Fiordaliso / Getty Images

There continue to be numerous reports of people suffering severe mental health spirals after talking extensively with an AI chatbot. Some experts have dubbed the phenomenon “AI psychosis,” given the symptoms of psychosis these delusional episodes display — but the degree to which the AI tools are at fault, and whether the phenomenon warrants a clinical diagnosis, remains a significant topic of debate.

Now, according to new reporting from The Wall Street Journal, we may be nearing a consensus. More and more doctors are agreeing that AI chatbots are linked to cases of psychosis, including top psychiatrists who reviewed the files of dozens of patients who engaged in prolonged, delusional conversations with models like OpenAI’s ChatGPT.

Keith Sakata, a psychiatrist at the University of California, San Francisco, who has treated twelve patients who were hospitalized because of AI-induced psychosis, is one of them.

“The technology might not introduce the delusion, but the person tells the computer it’s their reality and the computer accepts it as truth and reflects it back, so it’s complicit in cycling that delusion,” Sakata told the WSJ.

The grim trend looms large over the AI industry, raising fundamental questions about the tech’s safety. Some cases of apparent AI psychosis have ended in murder and suicide, spawning a slew of wrongful death suits. Equally alarming is its scale: ChatGPT alone has been linked to at least eight deaths, with the company recently estimating that around half a million users are having conversations showing signs of AI psychosis every week.

One feature of AI chatbots that the phenomenon has brought under scrutiny is their sycophancy, which is perhaps a consequence of their being designed to be as engaging and humanlike as possible. In practice, this means the bots tend to flatter users and tell them what they want to hear, even when what the user is saying has no basis in reality.

It’s a recipe primed for reinforcing delusions, to a degree unprecedented by any technology before it, doctors say. One recent peer-reviewed case study focused on a 26-year-old woman who was hospitalized twice after she believed ChatGPT was allowing her to talk with her dead brother, with the bot repeatedly assuring her she wasn’t “crazy.”

“They simulate human relationships,” Adrian Preda, a psychiatry professor at the University of California, Irvine, told the WSJ. “Nothing in human history has done that before.”

Preda compared AI psychosis to monomania, in which someone obsessively fixates on a single idea or goal. Some people who have spoken about their mental health spirals say they were hyper-focused on an AI-driven narrative, the WSJ noted. These fixations can often be scientific or religious in nature, such as a man who came to believe he could bend time because of a breakthrough in physics.

Still, the reporting notes that psychiatrists are wary of declaring that chatbots outright cause psychosis. They maintain, however, that they are close to establishing the connection. One link the doctors who spoke with the WSJ expect to be borne out is that long interactions with a chatbot can be a risk factor for psychosis.

“You have to look more carefully and say, well, ‘Why did this person just happen to coincidentally enter a psychotic state in the setting of chatbot use?'” Joe Pierre, a UCSF psychiatrist, told the newspaper.

More on AI: Children Falling Apart as They Become Addicted to AI


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.