Move Fast and Break Teens

Former OpenAI Insider Says It’s Failed Its Users

"People deserve more than just a company’s word that it has addressed safety issues."
Victor Tangermann
Former OpenAI safety researcher Steven Adler argued that OpenAI isn't doing enough to mitigate users' severe mental health issues.

Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

Earlier this year, when OpenAI released GPT-5, it made an abrupt announcement: it was shutting down all of its previous models.

There was immense backlash, because users had become emotionally attached to the warmer, more “sycophantic” tone of GPT-5’s predecessor, GPT-4o. The outcry was so intense that OpenAI reversed course, bringing 4o back and making GPT-5 itself more sycophantic.

The incident was symptomatic of a much broader trend. We’ve already seen users getting sucked into severe mental health crises by ChatGPT and other AI chatbots, a troubling phenomenon experts have since dubbed “AI psychosis.” In the worst cases, these spirals have already resulted in several suicides, with one pair of parents even suing OpenAI for playing a part in their child’s death.

In a new announcement this week, the Sam Altman-led company estimated that hundreds of thousands of active ChatGPT users show “possible signs of mental health emergencies related to psychosis and mania.” An even larger contingent was found to have “conversations that include explicit indicators of potential suicide planning or intent.”

In an essay for the New York Times, former OpenAI safety researcher Steven Adler argued that OpenAI isn’t doing enough to mitigate these issues, while succumbing to “competitive pressure” and abandoning its focus on AI safety.

He criticized Altman for claiming that the company had “been able to mitigate the serious mental health issues” with the use of “new tools,” and for saying the company would soon allow adult content on the platform.

“I have major questions — informed by my four years at OpenAI and my independent research since leaving the company last year — about whether these mental health issues are actually fixed,” Adler wrote. “If the company really has strong reason to believe it’s ready to bring back erotica on its platforms, it should show its work.”

“People deserve more than just a company’s word that it has addressed safety issues,” he added. “In other words: Prove it.”

To Adler, opening the floodgates to mature content could have disastrous consequences.

“It’s not that erotica is bad per se, but that there were clear warning signs of users’ intense emotional attachment to AI chatbots,” he wrote, recalling his time leading OpenAI’s product safety team in 2021. “Especially for users who seemed to be struggling with mental health problems, volatile sexual interactions seemed risky.”

OpenAI’s latest announcement on the prevalence of mental health issues was a “great first step,” Adler argued, but he criticized the company for releasing the figures “without comparison to rates from the past few months.”

Instead of moving fast and breaking things, OpenAI, alongside its peers, “may need to slow down long enough for the world to invent new safety methods — ones that even nefarious groups can’t bypass,” he wrote.

“If OpenAI and its competitors are to be trusted with building the seismic technologies for which they aim, they must demonstrate they are trustworthy in managing risks today,” Adler added.

More on OpenAI: OpenAI Data Finds Hundreds of Thousands of ChatGPT Users Might Be Suffering Mental Health Crises
