Last week, OpenAI startled the world by announcing that its long-awaited GPT-5 would replace all of its previous models.
The move sparked outrage. Apart from being severely underwhelmed by the performance of OpenAI's newest offering, power users immediately began begging CEO Sam Altman to bring back the preceding models, often for a reason that had little to do with intelligence, artificial or otherwise: they were emotionally attached to them.
"Why are we getting rid of the variants and 4o when we all have unique communication styles?" one Reddit user pleaded during an Ask Me Anything with Altman and the GPT-5 team last week.
The sentiment was so overwhelming that Altman caved almost immediately, declaring just over 24 hours after the GPT-5 announcement that the "deprecated" GPT-4o model would be made available once more.
"Ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!)" Altman responded.
"We are going to bring it back for Plus users, and will watch usage to determine how long to support it," he added, referring to the company's paid ChatGPT Plus subscription service.
Even that concession wasn't enough; the community was so desperate to keep 4o that Reddit users continued to press Altman for a firmer commitment.
"Would you consider offering GPT-4o for as long as possible rather than just 'we’ll think about how long to offer it for?'" one user wrote.
The incident highlights just how attached, both emotionally and practically, ChatGPT users have become to the service. Numerous users have even been pulled into severe mental health crises, fueled by the bots, that psychiatrists are now dubbing "AI psychosis."
The trend is something Altman appears to be aware of. In a lengthy tweet on Sunday, the billionaire expressed his views on the matter.
"If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models," he wrote. "It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake)."
Altman revealed that the firm had been closely tracking these unprecedented levels of attachment to its models "for the past year or so."
"People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," the CEO tweeted. "Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot."
While some people were "getting value" from using "ChatGPT as a sort of therapist or life coach," he wrote, others were being "unknowingly nudged away from their longer term well-being (however they define it.)"
Altman notably stopped short of using the word "addiction" to describe people's obsession with the tool.
"It’s also bad, for example, if a user wants to use ChatGPT less and feels like they cannot," he wrote, admitting that a future where "people really trust ChatGPT’s advice for their most important decisions" makes him "uneasy."
But beyond arguing that OpenAI, which is eyeing an astronomical $500 billion valuation, has a "good shot at getting this right," Altman offered little in terms of real-world solutions.
"We have much better tech to help us measure how we are doing than previous generations of technology had," he wrote. "For example, our product can talk to users to get a sense for how they are doing with their short- and long-term goals, we can explain sophisticated and nuanced issues to our models, and much more."
The topic of users getting too attached has seemingly been top of mind for the AI company. In an August 4 blog post, OpenAI admitted that "there have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency."
"While rare, we're continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed," the company wrote.
How Altman's latest reassurances will play out in real-life product updates remains to be seen. OpenAI's public response so far to users growing increasingly attached to its AI models has left a lot to be desired.
Last week, the company claimed it had rolled out an "optimization" in the form of vaguely worded commitments to "better detect signs of emotional distress" and to nudge users with "gentle reminders during long sessions to encourage breaks."
For months, OpenAI has also been giving out the same copy-pasted statement to news outlets, saying that the "stakes are higher" as a result of ChatGPT feeling "more responsive and personal than prior technologies, especially for vulnerable individuals."
Earlier this year, OpenAI was forced to roll back an update to its GPT-4o model after users noticed it was being far too "sycophant-y and annoying," in the words of Altman himself.
There's also a certain tension in OpenAI's response to the situation: addicted users are, by definition, fantastic for its engagement metrics, giving rise to the same perverse incentives we've watched play out on social media over the past decade.
While OpenAI's sky-high expenditures still eclipse any hope of an imminent return on investment, subscribers are one of the firm's very few sources of actual revenue, a reality underscored by how quickly Altman relented when paying subscribers revolted last week.
More on OpenAI: Man Follows ChatGPT's Advice and Poisons Himself