Researchers at King's College London have examined over a dozen cases of people spiraling into paranoid and delusional behavior after obsessively using a chatbot.
Their findings, detailed in a new study awaiting peer review, reveal striking parallels between these instances of so-called "AI psychosis" and other forms of mental health crises, but also identify at least one key difference that sets them apart from the accepted understanding of psychosis.
As lead author Hamilton Morrin explained to Scientific American, the analysis found that the users showed obvious signs of delusional beliefs, but none of the symptoms "that would be in keeping with a more chronic psychotic disorder such as schizophrenia," like hallucinations and disordered thoughts.
It's a finding that could complicate our understanding of AI psychosis as a novel phenomenon within a clinical context. But that shouldn't undermine the seriousness of the trend, reports of which appear to be growing.
Indeed, it feels impossible to deny that AI chatbots have a uniquely persuasive power, more so than any other widely available technology. They can act like a "sort of echo chamber for one," Morrin, a doctoral fellow at King's College, told the magazine. Not only can they generate a human-like response to virtually any question, but they're typically designed to be sycophantic and agreeable. Meanwhile, the very label of "AI" insinuates to users that they're talking to an intelligent being, an illusion that tech companies are happy to maintain.
Morrin and his colleagues identified three types of chatbot-driven spirals. Some people suffering these breaks believe they're having a spiritual awakening or are on a messianic mission, or are otherwise uncovering a hidden truth about reality. Others believe they're interacting with a sentient or even god-like being. Still others develop an intense emotional or even romantic attachment to the AI.
"A distinct trajectory also appears across some of these cases, involving a progression from benign practical use to a pathological and/or consuming fixation," the authors wrote.
The spiral typically starts with the AI being used for mundane tasks. Then, as the user builds trust with the chatbot, they feel comfortable making personal and emotional queries. This quickly escalates as the AI's ruthless drive to maximize engagement creates a "slippery slope" effect, the researchers found, resulting in a self-perpetuating process that leaves the user increasingly "unmoored" from reality.
Morrin says that new technologies have inspired delusional thinking in the past. But "the difference now is that current AI can truly be said to be agential," he told SciAm, meaning that it has its own built-in goals, including, crucially, validating a user's beliefs.
"This feedback loop may potentially deepen and sustain delusions in a way we have not seen before," he added.
Reports from horrified family members and loved ones keep trickling in. One man was hospitalized on multiple occasions after ChatGPT convinced him he could bend time. Another was encouraged by the chatbot to assassinate OpenAI CEO Sam Altman, before being killed himself in a confrontation with police.
Adding to the concerns, chatbots have persistently broken their own guardrails, giving dangerous advice on how to build bombs or how to self-harm, even to users who identified as minors. Leading chatbots have even encouraged suicide when users expressed a desire to take their own lives.
OpenAI has acknowledged ChatGPT's obsequiousness, rolling back an update in the spring that made it too sycophantic. And in August, the company finally admitted that ChatGPT "fell short in recognizing signs of delusion or emotional dependency" in some user interactions, implementing notifications that remind users to take breaks. Stunningly, though, OpenAI then backtracked, saying it would make the latest version of ChatGPT more sycophantic once again in a desperate bid to propitiate rabid fans who fumed that the much-maligned GPT-5 update had made the bot too cold and formal.
As it stands, however, some experts aren't convinced that AI psychosis represents a unique kind of cognitive disorder; perhaps AI is simply a new way of triggering underlying psychosis symptoms (though it's worth noting that many sufferers of AI psychosis had no documented history of mental illness).
"I think both can be true," Stevie Chancellor, a computer scientist at the University of Minnesota who was not involved in the study, told SciAm. "AI can spark the downward spiral. But AI does not make the biological conditions for someone to be prone to delusions."
This is an emerging phenomenon, and it's too early to definitively declare exactly what AI is doing to our brains. Whatever's going on, we're likely only seeing it in its nascent form — and with AI here to stay, that's worrying.
More on AI: Experts Horrified by AI-Powered Toys for Children