
While many working people are understandably worried about AI taking their jobs and leaving them on the street, another consequence of the AI revolution is filling beds in mental health facilities.
The mass adoption of large language model (LLM) chatbots is fueling a wave of mental health crises centered on AI use. People share delusional or paranoid thoughts with a product like ChatGPT, and instead of urging the user to get help, the bot affirms the unbalanced thinking, often over marathon chat sessions that can spiral into tragedy, including death.
New reporting by Wired, drawing on more than a dozen psychiatrists and researchers, calls it a “new trend” growing in our AI-powered world. Keith Sakata, a psychiatrist at UCSF, told the publication he’s counted a dozen cases of hospitalization in which AI “played a significant role” in “psychotic episodes” this year alone.
Sakata is one of many mental health professionals on the front lines of an urgent and poorly understood health crisis stemming from people's relationships with AI. The condition doesn't yet have a formal diagnosis, but psychiatrists are already calling it "AI psychosis" or "AI delusional disorder."
Hamilton Morrin, a psychiatric researcher at King's College London, told The Guardian that he was inspired to co-author a research article on AI's effect on psychotic disorders after encountering patients who had developed psychotic illness while using LLM chatbots.
Yet another mental health professional wrote a column in the Wall Street Journal after patients began bringing their AI chatbots into therapy sessions unprompted.
While a rigorous study of AI's impact on mental health caseloads has yet to be attempted, what we know so far isn't looking great.
A recent preliminary survey of AI-related psychiatric impacts by social work researcher Keith Robert Head points to a coming society-wide crisis brought on by “unprecedented mental health challenges that mental health professionals are ill-equipped to address.”
“We are witnessing the emergence of an entirely new frontier of mental health crises as AI chatbot interactions begin producing increasingly documented cases of suicide, self-harm, and severe psychological deterioration that were previously unprecedented in the internet age,” Head writes.
Indeed, the stories emerging so far are grim. There's still some debate over whether LLM chatbots cause delusional behavior or merely reinforce it, but the real-life accounts paint a disturbing picture either way.
Some involve people with a history of mental health problems who had been managing their symptoms effectively before a chatbot entered their lives. In one case, a woman who had kept her schizophrenia under control with medication for years became convinced by ChatGPT that the diagnosis was a lie. She soon went off her prescription and spiraled into a delusional episode, one that arguably wouldn't have happened without the chatbot.
Other anecdotes suggest that people with no history of mental health issues are falling victim to AI delusions. Recently, a longtime OpenAI investor and successful venture capitalist became convinced by ChatGPT that he had discovered a "non-governmental system" targeting him personally, described in terms that online observers quickly noticed appeared to be drawn from popular fan fiction.
Another disturbing tale involved a father of three with no history of mental illness who spiraled into an apocalyptic delusion after ChatGPT convinced him he had discovered a new type of math.
One thing’s for sure: a flood of new psychiatric patients is the last thing our rapidly decaying mental health infrastructure needs.
More on chatbot psychosis: ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners