Something keeps happening to people who get hooked on chatbots like ChatGPT.
Mental health professionals are calling it "AI psychosis": turning to the AI models for advice, users soon become entranced by the sycophantic machine's human-like responses. It becomes not just a tool but a companion — and the worst kind, constantly plying you with what you want to hear and validating anything you say, no matter how wrong or unbalanced. That leads to cases like a man who was repeatedly hospitalized after ChatGPT convinced him he could bend time, or another who believed he'd discovered breakthroughs in physics. Sometimes, it turns horrifically tragic: interactions with AI chatbots have allegedly led to several deaths, including the suicide of a 16-year-old boy.
Whether "AI psychosis" — not yet an official diagnosis — will remain the preferred term is an open question. But experts do emphasize that something unique, bizarre, and deeply alarming is happening in these AI interactions, with many of the reported cases involving people with no history of mental illness, even if the cases don't perfectly align with known types of psychosis.
In a recent interview with Rolling Stone, clinical psychologist Derrick Hull — who is helping build a therapy chatbot at Slingshot AI — opined that the reported cases "seem more akin to what could be called 'AI delusions'" than to psychosis. Psychosis, he added, is a "large term" that describes "hallucinations and a variety of other symptoms" that he hasn't seen in the reported cases.
Hull, whose work at Slingshot AI aims to build a chatbot that healthily challenges users rather than constantly agreeing with them, cited the example of a man who believed he'd pioneered a new field of "temporal" mathematics after extensive conversations with ChatGPT, convinced that his ideas would change the world while his real, personal life fell by the wayside. But the spell was broken when he asked another AI chatbot, Google Gemini, to review his theory. Savagely, the AI said the work was merely an "example of the ability of language models to lead convincing but completely false narratives."
"Immediately, his certainty, that bubble was burst," Hull told Rolling Stone. "You don't see that in people who have schizophrenia or other kinds of psychotic experiences — the 'insight' doesn't go away that fast."
In short, according to Hull, we're seeing rampant delusions, but not necessarily psychosis. That echoes a recent study by researchers at King's College London, who examined over a dozen cases of people spiraling into paranoid thinking and experiencing breaks with reality. They found that the sufferers had clearly been led into delusional beliefs, but didn't show signs of the hallucinations and disordered thinking that are emblematic of schizophrenia and other forms of psychosis.
The likelier explanation, in the authors' view, wasn't any less concerning. Speaking to Scientific American, lead author Hamilton Morrin described the bots as creating an "echo chamber for one," and warned in the paper that AI chatbots may "sustain delusions in a way we have not seen before."
Hull seems to agree that something unique is going on, and that we're only seeing its nascent stages.
"I predict that in the years ahead there will be new categories of disorders that exist because of AI," he wrote in a LinkedIn post last month, as quoted by Rolling Stone.
AI is "hijacking healthy processes in a way that leads to what we would call pathology, or leads to dysfunction in some way," he told the magazine, "rather than just capitalizing on folks who are already experiencing dysfunction of some kind."
In sum, "AI psychosis" may not, strictly speaking, be the most precise language to describe what's happening. On the other hand, rallying around a shared term matters: it gives the phenomenon a name and rightfully rings alarm bells at a time when academic investigation into the subject is scarce. Indeed, it's only because that scientific literature is missing in the first place that the term could take hold at all.
It's undeniable that sycophantic AI chatbots are driving people into mental health crises in some shape or form, and a slight misnomer — if indeed it is one, since this is an emerging topic — shouldn't put a dent in raising awareness of the serious risks posed by the tech, or in efforts to hold accountable the multibillion-dollar companies peddling it.
More on AI: First AI Psychosis Case Ends in Murder-Suicide