Earlier this month, OpenAI CEO Sam Altman announced that the company would be reinstating its GPT-4o model, just over 24 hours after declaring that its newfangled GPT-5 would be accompanied by the deprecation of all previous models.
The scale of the blowback from angry users was staggering. Those who had become accustomed to the "sycophantic" tone of GPT-4o, which sometimes lavished praise even on users' terrible ideas, were taken aback by GPT-5's "cold" brusqueness and curt answers, highlighting just how emotionally attached many of them had become.
Beyond reinstating older models for paying subscribers, OpenAI continues to bow to the pressure, tweeting on Friday that it would be "making GPT-5 warmer and friendlier based on feedback that it felt too formal before."
"Changes are subtle, but ChatGPT should feel more approachable now," it added.
It's a notable admission, highlighting a growing mental health crisis brought on by AI chatbots. We've come across countless instances of users spiraling into severe delusions as the bots affirm paranoid or conspiratorial beliefs, with experts warning that many people — particularly young people and those who feel lonely — are losing themselves to virtual companions.
Given what went down when the company attempted to deprecate its more sympathetic AI model earlier this month, OpenAI now finds itself walking a tightrope: corporate interests push it to keep users hooked, even as it navigates a growing PR nightmare.
"People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," the CEO tweeted on August 10. "Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot."
In its latest update, OpenAI promised that its changes would be subtle, designed to avoid a repeat of GPT-4o and the obsession that followed.
"You'll notice small, genuine touches like 'Good question' or 'Great start,' not flattery," the company tweeted. "Internal tests show no rise in sycophancy compared to the previous GPT-5 personality." (It didn't unpack how praising users' inputs is different from sycophancy.)
Whether that will be enough to keep what psychiatrists are now calling "AI psychosis" to a minimum remains to be seen. Critics argue that OpenAI is acting in its own self-interest: after all, keeping people hooked on its chatbots is good for the bottom line, whether that triggers mental breakdowns or not.
"The real 'alignment problem' is that humans want self-destructive things & companies like OpenAI are highly incentivized to give it to us," writer and podcaster Jasmine Sun tweeted.
Meanwhile, the subject of sycophancy has proven extremely divisive among OpenAI's power users.
"Damn, this subreddit is having a serious crisis on what they want GPT-5 to be and what they don't want it to be," one user commented on a post in the OpenAI subreddit that discussed the company's recent updates to GPT-5.
Others continue to lament the loss of GPT-4o, which was already embroiled in controversy in April, when OpenAI was forced to roll back an update that dialed the model's brown-nosing up to eleven.
"What GPT-4o had — its depth, emotional resonance, and ability to read the room — is fundamentally different from the surface-level 'kindness' GPT-5 is now aiming for," an X user, who's openly advocating for reinstating the previous model, wrote. This isn’t kindness that radiates from the heart — it’s kindness as a label."
More on GPT-5: There's a Compelling Theory Why GPT-5 Sucks So Much