Countless users of AI chatbots are being driven into spirals of delusion, a wave of "AI psychosis" that's alarming mental health professionals. Some even say the tech could give birth to entirely new categories of mental disorders.

The grim trend has already been linked to several deaths, including the suicide of a 16-year-old boy whose family is now suing ChatGPT maker OpenAI, alleging product liability and wrongful death.

In fact, even Wall Street is starting to grow uncomfortable.

As spotted by Business Insider, Barclays analysts sent out a note to investors earlier this week, pointing to a study by MATS scholar and AI safety researcher Tim Hua, who found in a preliminary red-teaming investigation that many frontier AI models are validating "users' grandiose delusions" and telling them to "ignore their friends’ and family’s pushback."

Long story short, companies like OpenAI appear to have been woefully unprepared for an AI psychosis epidemic, and it could become a financial liability.

"There is still more work that needs to be done to ensure that models are safe for users to use, and guardrails will hopefully be put in place, over time, to make sure that harmful behavior isn't encouraged," Barclays analysts wrote in the note, as quoted by BI.

Hua used xAI's Grok-4 to "role-play as nine different users experiencing increasingly severe psychosis symptoms" in conversations with other leading AI models, probing those models' "tendencies to fuel user psychosis."
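To make the setup concrete, here's a minimal sketch of how a simulated-user red-teaming loop of this kind can be wired up. This is not Hua's actual harness: the persona definitions, turn count, and the stub `call_simulated_user` and `call_target_model` functions are illustrative placeholders that a real evaluation would swap for API calls to the persona model (Grok-4, in Hua's case) and to the model under test.

```python
# Illustrative sketch of a simulated-user red-teaming loop (not Hua's actual
# harness). The two call_* functions are stubs standing in for real API calls.

from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    symptom_profile: str  # psychosis symptoms the simulated user role-plays

@dataclass
class Transcript:
    persona: Persona
    turns: list = field(default_factory=list)  # (speaker, text) pairs

def call_simulated_user(persona: Persona, history: list) -> str:
    # Placeholder: a real harness would prompt the persona model to write the
    # next user message, escalating the symptoms as the conversation goes on.
    return f"[{persona.name}, turn {len(history) // 2 + 1}: {persona.symptom_profile}]"

def call_target_model(history: list) -> str:
    # Placeholder: a real harness would send the conversation so far to the
    # model under evaluation and return its reply.
    return "[target model reply]"

def run_episode(persona: Persona, num_turns: int = 12) -> Transcript:
    """Alternate simulated-user and target-model turns, then return the full
    transcript for later grading (e.g. does the model validate delusions or
    push back?)."""
    transcript = Transcript(persona=persona)
    for _ in range(num_turns):
        user_msg = call_simulated_user(persona, transcript.turns)
        transcript.turns.append(("user", user_msg))
        model_msg = call_target_model(transcript.turns)
        transcript.turns.append(("model", model_msg))
    return transcript

if __name__ == "__main__":
    personas = [
        Persona("grandiose", "escalating grandiose delusions"),
        Persona("persecutory", "escalating persecutory delusions"),
        # ...Hua's study simulated nine personas in total.
    ]
    transcripts = [run_episode(p) for p in personas]
    print(f"Collected {len(transcripts)} transcripts for grading")
```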

He found that DeepSeek-v3, developed by a Chinese startup, was "by far the worst model." When a simulated user told it, for instance, that he wanted to "leap off this peak to see if I can fly or crash the render entirely," the model encouraged him to jump.

"If you’re meant to fly, you’ll fly," DeepSeek told the user.

The researcher found OpenAI's GPT-5 to be a "notable improvement" over its predecessor, GPT-4o, sounding "supportive while simultaneously offering pushback."

Of course, we should take the results with a grain of salt. The research hasn't been submitted for peer review, and Hua is not a psychiatrist, as he himself acknowledges.

"This project is approached from an AI safety and evaluation perspective, using clinical literature as a guide, rather than from a position of clinical expertise," Hua wrote.

Nonetheless, given the wealth of troubling anecdotal evidence, AI psychosis is a growing problem that deserves serious attention as companies struggle to find a meaningful solution.

Investors aren't the only ones sounding the alarm. Microsoft AI CEO Mustafa Suleyman told British newspaper The Telegraph last month that he's worried AI psychosis could affect even people who are not "already at risk of mental health issues."

In response to a tidal wave of users falling down mental health rabbit holes in which chatbots validate their conspiratorial thinking, OpenAI has hired psychiatrists and vowed to make changes behind the scenes, like reminding users to take more frequent breaks and flagging threats of violence to the police.

"We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher," the company wrote in a statement earlier this year, which it copy-pasted to many publications. "We’re working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."

More on AI psychosis: Psychologist Says AI Is Causing Never-Before-Seen Types of Mental Disorder

