As OpenAI continues trying to tamp down the chaos of GPT-5's disastrous release, CEO Sam Altman is throwing digs at the company's now-deprecated and controversial model.

Late last week, OpenAI finally rolled out GPT-5, the latest version of its flagship large language model. Without warning, the company replaced all previous versions with the latest model — which, in addition to being deeply underwhelming, proved to have a colder, less obsequious tone than its predecessor, GPT-4o.

The response from the company's users was swift. Many people, particularly those who seem to have developed an attachment or even addiction to the 4o model and its sycophantic style, reacted with frustration and distress to the abrupt change.

Less than a day later, GPT-4o was back — for paying customers, that is.

In a post to X-formerly-Twitter last night, Altman affirmed that 4o is "back in the model picker for all paid users by default." He also promised that if OpenAI ever does kill off the obsequious 4o model for good, the company will "give plenty of notice."

And as for GPT-5's colder persona, Altman promised that would change too, writing that OpenAI is working on "an update to GPT-5's personality which should feel warmer than the current personality but not as annoying (to most users) as GPT-4o."

The move appears to highlight Altman and OpenAI's awareness of how hooked a sizable faction of its user base is on AI sycophancy, not to mention how willing the AI company is to acquiesce to those users' outrage.

That's striking, given that sycophancy has contributed to users experiencing deep emotional enmeshment with ChatGPT, AI-fueled delusional spirals, and full-blown breaks from reality — a serious issue linked to functions of the tech that go beyond the model simply being "annoying" to some users.

Altman rounded out the post by announcing that one of OpenAI's biggest takeaways from the GPT-5 launch is that "we really just need to get to a world with more per-user customization of model personality."

In other words, the CEO thinks that users should have greater control over their chatbots' tone, attitude, and style.

It's a shift that would likely make the ChatGPT user experience even more hyperpersonalized, which makes it an eyebrow-raising position for the CEO to take.

Sure, users will always have preferences. But if those preferences are contributing to unhealthy use and dependency, should it be up to users to design their own customized drug?

More on AI dependency: Looking at This Subreddit May Convince You That AI Was a Huge Mistake

