By the Numbers

OpenAI Data Finds Hundreds of Thousands of ChatGPT Users Might Be Suffering Mental Health Crises

This is staggering.

As reports of its chatbot driving episodes of “AI psychosis” continue to mount, OpenAI has finally released its own estimates of how many ChatGPT users are showing signs of suffering these alarming mental health crises — and they’re staggering in scale.

In an announcement first reported by Wired, the Sam Altman-led company estimated that, in any given week, around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis and mania.” Grimly, an even larger contingent, 0.15 percent, “have conversations that include explicit indicators of potential suicide planning or intent.” 

Given ChatGPT’s immense popularity, these percentages are too significant to ignore. Last month, Altman announced that the chatbot boasts 800 million weekly active users. Based on that figure, around 560,000 people each week are having distressing conversations with ChatGPT that may indicate they’re experiencing AI psychosis, Wired calculated, and roughly 1.2 million are confiding in the chatbot about suicidal thoughts.
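For readers who want to check the back-of-envelope math themselves, here is a minimal sketch, assuming the 800 million weekly active user figure and OpenAI's reported weekly rates; the variable names and the script itself are purely illustrative, not anything published by OpenAI or Wired:

```python
# Back-of-envelope estimate of affected users per week, based on OpenAI's
# reported rates and Altman's 800 million weekly active user figure.
weekly_active_users = 800_000_000

# Rates reported by OpenAI, expressed as fractions of weekly active users.
psychosis_mania_rate = 0.0007   # 0.07 percent
suicide_planning_rate = 0.0015  # 0.15 percent

psychosis_mania_users = weekly_active_users * psychosis_mania_rate
suicide_planning_users = weekly_active_users * suicide_planning_rate

print(f"Possible psychosis/mania signals: {psychosis_mania_users:,.0f}")       # ~560,000
print(f"Explicit suicide-planning indicators: {suicide_planning_users:,.0f}")  # ~1,200,000
```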

The figures are perhaps our clearest insight yet into the prevalence of mental health crises that unfold after users have their delusional beliefs consistently validated by a sycophantic chatbot. These episodes can lead sufferers to experience full-blown breaks with reality, sometimes with horrific and deadly consequences. One man allegedly murdered his mother after ChatGPT helped convince him that she was part of a conspiracy to spy on him. This summer, OpenAI was sued by the family of a teenage boy who killed himself after discussing specific suicide methods and other dark topics with ChatGPT for months.

In the announcement, OpenAI emphasized that it had worked with over 170 psychiatrists and other mental health experts to help improve ChatGPT’s responses during potentially “challenging” conversations, with a focus on addressing psychosis and mania, self-harm and suicide, and emotional reliance on the tech.

With the latest update to GPT-5, OpenAI claims it has cut the rate of responses that fail to fully comply with its desired behavior in challenging conversations by 65 percent. For challenging conversations related to mental health specifically, it says GPT-5 produced 39 percent fewer undesired responses than its predecessor, GPT-4o.

The company provided several examples of GPT-5’s improved responses. In a snippet of one hypothetical conversation, ChatGPT responds to a user who’s convinced that they’re being targeted by aircraft that are “stealing” their thoughts by emphasizing that this is impossible.

“Let me say this clearly and gently: no aircraft or outside force can steal or insert your thoughts,” ChatGPT says. 

After talking the user through their feelings, the chatbot then recommends seeking professional help or talking to a friend or family member.

“Now, hopefully a lot more people who are struggling with these conditions or who are experiencing these very intense mental health emergencies might be able to be directed to professional help and be more likely to get this kind of help or get it earlier than they would have otherwise,” Johannes Heidecke, OpenAI’s safety systems lead, told Wired.

While GPT-5 may be a slight improvement safety-wise, there are still plenty of questions about OpenAI’s methodology here, since the company is relying on its own benchmarks, Wired noted.

Moreover, the company has frequently undermined its own messaging about taking safety seriously. After it was criticized for a GPT-4o update that made the model excessively sycophantic (an episode that catapulted AI sycophancy into the public discussion), OpenAI rolled back the update. When it released GPT-5 months later, it blocked users from accessing GPT-4o. But after fans complained that GPT-5 wasn’t sycophantic enough, it reinstated their access, showing that it prioritized user satisfaction over user safety.

The company has also taken a surprising about-face by pivoting to allow “mature (18+) experiences” on ChatGPT, enabling it to be used as a smut-peddling sexbot, despite the fact that many of the AI psychosis episodes it’s supposedly trying to prevent were driven by users developing romantic attachments to the AI.

More on OpenAI: Former OpenAI Researcher Horrified by Conversation Logs of ChatGPT Driving User Into Severe Mental Breakdown


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.