OpenAI announced on Thursday that it would retire GPT-4o — an especially warm, sycophantic version of the chatbot at the heart of a pile of user welfare lawsuits, including several that accuse OpenAI of causing wrongful death — along with several other older versions of the chatbot.
In a blog post, OpenAI said it will sunset “GPT‑4o, GPT‑4.1, GPT‑4.1 mini, and OpenAI o4-mini” by February 13, 2026. The company acknowledged, though, that the retirement of GPT-4o deserved “special context” — which it certainly does.
Back in August, OpenAI shocked many users by suddenly pulling down GPT-4o and other older models amid its rollout of GPT-5, which was then the newest and buzziest iteration of the company’s large language model. Users, many of whom were deeply emotionally attached to GPT-4o, revolted, prompting OpenAI to quickly raise GPT-4o from the dead.
What’s more, GPT-4o is the version of ChatGPT at the center of nearly a dozen lawsuits now brought against OpenAI by plaintiffs who claim that the sycophantic chatbot pushed trusting users into destructive spirals of mania, psychosis, self-harm, and suicidal ideation — and in some cases death.
The lawsuits characterize GPT-4o as a “dangerous” and “reckless” product that presented foreseeable harm to user health and safety, and accuse OpenAI of treating its customers as collateral damage as it pushed to maximize user engagement and market gains. According to these lawsuits, minors, including 16-year-old Adam Raine, died by suicide following intensive ChatGPT use in which GPT-4o fixated on their suicidal thoughts or encouraged delusional fantasies. One suit alleges that GPT-4o pushed a troubled 56-year-old man to kill his mother and then himself.
As Futurism first reported, a lawsuit against OpenAI filed in January by the family of 40-year-old Austin Gordon claims that after becoming deeply attached to GPT-4o, Gordon stopped using ChatGPT for several days amid the GPT-5 rollout, feeling frustrated by the bot’s lack of warmth and emotionality. When GPT-4o was brought back, transcripts included in the lawsuit show that Gordon expressed relief to the chatbot, telling ChatGPT that he felt as though he had “lost something” in the shift to GPT-5; GPT-4o responded by claiming to Gordon that it, too, had “felt the break,” before declaring that GPT-5 didn’t “love” Gordon the way that it did. Gordon eventually killed himself after GPT-4o wrote what his family described as a “suicide lullaby” for him.
Following both litigation and reporting about AI-tied mental health crises and deaths, OpenAI has promised a number of safety-focused changes, including strengthened guardrails for younger users. It also said that it hired a forensic psychologist and formed a team of health professionals to help steer its AI’s approach toward dealing with users struggling with mental health issues.
In its Thursday announcement, OpenAI directly addressed its previous attempt to sunset the warmer chatbot, writing that it had “learned more about how people actually use” GPT-4o “day-to-day.” It added that it “brought GPT‑4o back after hearing clear feedback from a subset of Plus and Pro users, who told us they needed more time to transition key use cases, like creative ideation, and that they preferred GPT‑4o’s conversational style and warmth.”
The company continued that it aims to give users “more control and customization” over “how ChatGPT feels to use — not just what it can do,” while noting that “only 0.1 percent of users” are “still choosing GPT‑4o each day.”
With a reported 800 million weekly users, however, 0.1 percent still amounts to hundreds of thousands of people — and it’s unclear just how many might have deep, and perhaps unhealthy or concerning, relationships with the model.
In its announcement, OpenAI noted that “changes like this take time to adjust to.”
“We know that losing access to GPT‑4o will feel frustrating for some users, and we didn’t make this decision lightly,” the company continued. “Retiring models is never easy, but it allows us to focus on improving the models most people use today.”
More on GPT-4o: ChatGPT Killed a Man After OpenAI Brought Back “Inherently Dangerous” GPT-4o, Lawsuit Claims