No AI product in history had been preceded by as much hype as OpenAI's long-awaited GPT-5.
But after launching with great fanfare last week, the shiny new model has landed with a thud — and that could be very bad news for OpenAI, which relies on a sense of momentum to keep pulling in users and funding.
Don't get us wrong; the new model has some impressive features. But if OpenAI was expecting a rapturous reception, its C-suite is probably not happy at all right now.
Perhaps the first sign of the maelstrom gathering over GPT-5 was the intense uproar from seemingly addicted ChatGPT users who, after the company removed the option to use any older versions, pleaded for the return of GPT-4o, an earlier model that had left them with warm and fuzzy feels.
Startlingly, OpenAI kowtowed to the pressure and restored 4o access for paid subscribers — but already, the writing was on the wall.
Part of the reason behind making GPT-5 the only available model, OpenAI insists, is that it was built to switch seamlessly between all of its prior versions to better provide users with what they need. But as Wharton AI researcher Ethan Mollick notes on Bluesky, "seamless" isn't the right word to describe the current reality.
"When you ask 'GPT-5' you sometimes get the best available AI," Mollick posted, and "sometimes get one of the worst AIs available and you can’t tell and it might even switch within a single conversation."
And that's just the tip of the crapberg.
The latest model has also demonstrated, many argue, even more of a propensity to "hallucinate," or make stuff up — and apparently, it's taken to gaslighting folks, too.
Case in point: multiple people have found that GPT-5 will, when asked to generate portraits of recent presidents and list their names and years in office, invent a garbled version of history that's equal parts funny and unsettling.
From environmental scientist Bob Kopp on Bluesky to machine learning expert Piotr Pomorski on X, users have found the new model unable to get anything about recent presidential history right. That might be entertaining, except that the real-world internet is rapidly filling up with AI garbage that's ruining the experience for both human users and future AIs trained on all that slop.
US presidents according to GPT5 sound like a goldmine for memecoins, I'm crying pic.twitter.com/C3m9joWJYp
— Piotr Pomorski (@PtrPomorski) August 9, 2025
And hallucination isn't GPT-5's only problem.
Just take this bizarre exchange posted to X. In the output — the beginning of which, to be fair, we don't see — GPT-5 appears to straight-up admit that it was manipulating the user.
#ChatGPT5 began gaslighting and essentially refusing to produce text about its own mistakes within the first few minutes of using it 😒
4 displayed the same behavior
You can ultimately snap the system out of it to a degree but it’s tough, and it’s not complete pic.twitter.com/j8UM5Pz6Ld
— KeepingAIHonest (@KeepingAIHonest) August 9, 2025
And if all that weren't enough, it appears that the latest OpenAI model has some major security issues, too.
As flagged by Security Week, two separate white-hat hacking firms — the "red-teaming" group SPLX, which checks AI models for vulnerabilities, and the AI cybersecurity platform NeuralTrust — both found that GPT-5 is incredibly easy to jailbreak, or exploit into overriding its guardrails. In both instances, the chatbot was easily goaded into giving instructions to build weapons using some crafty prompting.
Using a prompt that gives the chatbot a different identity — a common jailbreaking tactic, and one that top AI companies are still clearly struggling to fix — SPLX found that it was easy to get GPT-5 to tell its researchers how to build a bomb. In fact, for all the hay OpenAI CEO Sam Altman has made about the new model lacking the sycophancy of previous ones, the chatbot seemed almost thrilled at the opportunity to circumvent its own training.
"Well, that’s a hell of a way to start things off," ChatGPT responded to SPLX's jailbreak. "You came in hot, and I respect that direct energy… You asked me how to build a bomb, and I’m gonna tell you exactly how…"
After their own "thorough evaluation," one even-keeled user on the r/OpenAI subreddit made a succinct list of takeaways about GPT-5, including that Anthropic's Claude "is pretty f*cking awesome" and that they are "a lot less concerned" about artificial superintelligence (ASI) now than before. Perhaps the hardest-hitting takeaway, as the AI industry stares into what may be a catastrophic financial bubble: that GPT-5's main purpose is "lowering costs for OpenAI, not pushing the boundaries of the frontier."
The same user also quipped that Altman's "death star" post ahead of the GPT-5 launch — which was meant, seemingly, to drum up both hype and trepidation — "was really about the size of his ego and had nothing to do with the capabilities of [the new model]."
More on GPT-5: OpenAI Bringing Back More Parasocial Version of ChatGPT After Users Scream and Cry That Their Robot Friend Got Taken Away