Regression to the Mean

Sam Altman Says Oops, They Accidentally Made the New Version of ChatGPT Worse Than the Previous One

"I think we just screwed that up."
OpenAI CEO Sam Altman recently admitted that the team working on the latest version of ChatGPT screwed it up.
Illustration by Tag Hartman-Simkins / Futurism. Source: Andrew Harnik / Getty Images

It’s been a little over three years since the launch of the first commercially available large language model (LLM) chatbot, OpenAI’s ChatGPT. And though the AI model has certainly made performance gains since it came online, the lackluster performance of recent iterations hasn’t helped the perception that LLMs are hitting a plateau.

Case in point: OpenAI CEO Sam Altman recently conceded that the company had “screwed up” the language capabilities of its latest chatbot iteration, GPT-5.2.

“I think we just screwed that up,” Altman said at a developer town hall on Monday. “We will make future versions of GPT 5.x hopefully much better at writing than 4.5 was.”

Continuing, Altman said that the company chose to focus on ChatGPT’s technical capabilities, perhaps to the detriment of its human-language performance.

“We did decide, and I think for good reason, to put most of our effort in 5.2 into making it super good at intelligence, reasoning, coding, engineering, that kind of thing,” Altman said. “And we have limited bandwidth here, and sometimes we focus on one thing and neglect another.”

The admission raises a high-stakes question: whether frontier AI models can continue to excel at tasks across the board, or if proficiency in one domain will start to come at the expense of a broader skill set.

As Search Engine Journal points out, the release of GPT-5.2 came with a heavy emphasis on technical tasks like coding and formatting spreadsheets. Compared to past iterations, there was scarce mention of any writing or creative work at all, a pivot that has left many non-technical users feeling like ChatGPT is hitting a wall.

As data scientist and tech blogger Mehul Gupta pointed out in a review of GPT-5.2, there are plenty of signs that the LLM is backsliding, and some of them aren’t particularly subtle.

These include a “flatter tone,” worse translation capability, inconsistent behavior across tasks, and some major regression in “instant mode,” a setting meant to provide immediate answers to simple questions.

As Gupta writes, it also struggles with real-world tasks. When it comes to evaluating human documents like contracts, mixed-format notes or PDFs, GPT-5.2 “forgot earlier details, contradicted itself, misread cross-references, [and] hallucinated clarifications that didn’t exist.”

“Benchmarks are clean,” Gupta observed. “Real documents are not. 5.2 still struggles with the noise of reality.”



Joe Wilkins

Correspondent

I’m a tech and transit correspondent for Futurism, where my beat includes transportation, infrastructure, and the role of emerging technologies in governance, surveillance, and labor.