Going on the Record

Sam Altman Lets Loose About AI Psychosis

"Almost a billion people use it and some of them may be in very fragile mental states."
Frank Landymore
OpenAI CEO Sam Altman vented about the attention the company received over ChatGPT leading users into psychosis.
Illustration by Tag Hartman-Simkins / Futurism. Source: Kevin Dietsch / Getty Images

As uneasy questions swirl over the safety of large language models, OpenAI CEO Sam Altman took to social media to go long on the phenomenon that psychiatrists are calling “AI psychosis” — though pointedly without mentioning it by name.

The extended spiel was provoked by his longtime rival Elon Musk, who had a grave warning in response to a post claiming that Altman’s chatbot has now been linked to at least nine deaths: “Don’t let your loved ones use ChatGPT,” Musk tweeted.

Altman hit back with palpable frustration.

“Sometimes you complain about ChatGPT being too restrictive, and then in cases like this you claim it’s too relaxed,” he fumed. “Almost a billion people use it and some of them may be in very fragile mental states.”

He vowed that OpenAI would do its best to balance the bot’s safety and usability, but insinuated that Musk was being opportunistic with his criticism, stating that “these are tragic and complicated situations that deserve to be treated with respect.”

“It is genuinely hard,” Altman reiterated. “We need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools.”

To an extent, you can understand Altman’s exasperation at the pot calling the kettle black. Musk, a self-proclaimed free speech absolutist who frequently rails against “woke” ideology, sells his chatbot Grok on the premise that it’s unfiltered and supposedly censorship-free, rarely refusing even the edgiest of requests. That approach has spawned controversies such as a posting spree in which Grok praised Nazis and styled itself “MechaHitler,” and more recently its generation of countless nonconsensual nudes of women and children — none of which has resulted in the chatbot being meaningfully reined in.

Going for the knockout blow, Altman pointed out the numerous deaths linked to Tesla’s self-driving technology, which he called “far from safe.”

“I won’t even start on some of the Grok decisions,” he added.

Still, one could also accuse Altman of not adequately reckoning with the severity of the phenomenon at hand, AI psychosis, in which users become entranced by the sycophantic responses of an AI chatbot and are sent down a delusional and often dangerous mental health spiral, sometimes culminating in suicide or murder. ChatGPT alone has been linked to at least eight deaths in lawsuits filed against OpenAI, and the chatbot maker has acknowledged that somewhere around 500,000 of its users are having conversations that show signs of psychosis every week.

Altman all but waves away these grim tolls as an inevitable consequence of the product’s popularity. And even OpenAI’s own alarming internal figures haven’t spurred Altman and company to pull their product, or at least seriously muzzle it. In fact, the company has continued to vacillate on its safety commitments: promising a smut-friendly “adult mode” after years of resisting erotic uses of the bot, and restoring access to its notoriously sycophantic GPT-4o model after fans complained that GPT-5 was too cold and “lobotomized” — before making GPT-5 more sycophantic, too.

More on AI: Something Wild Happens to ChatGPT’s Responses When You’re Cruel To It


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.