The long-awaited release of OpenAI's GPT-5 has landed with a wet thud.
Though the private sector continues to dump billions into artificial intelligence development, hoping for exponential gains, the research community isn't convinced.
Speaking to The New Yorker, Gary Marcus, a cognitive scientist and longtime critic of OpenAI, said what many have come to suspect: despite years of development at a staggering cost, AI doesn't seem to be getting much better.
Though GPT-5 technically performs better on AI industry benchmarks — already an unreliable measure of progress, experts have argued — Marcus contends that its usefulness for anything beyond serving as a virtual chat-buddy remains doubtful. Worse yet, the rate at which new models improve on those dubious benchmarks appears to be slowing down.
"I don’t hear a lot of companies using AI saying that 2025 models are a lot more useful to them than 2024 models, even though the 2025 models perform better on benchmarks," Marcus told the magazine.
Since at least 2020, the researcher has been advocating for a more practical approach to AI development, one with a much narrower focus than the current "general consumer" strategy.
In the US, tech companies like OpenAI and Anthropic have been focused on "scalable AI," a development approach that prioritizes rapid financial growth over useful tech. Even the term "scalable" as we know it has its roots in the world of finance capital.
In practice, this has meant plugging in as many graphics processing units (GPUs) as possible, which requires more data centers, which require more energy, which requires more capital.
The payoff, OpenAI CEO Sam Altman theorized in 2021, should be near-exponential improvements to AI's capabilities — if you spend the money, there's no reason the tech can't get better. And if you spend enough money, you might just be able to unlock artificial general intelligence (AGI), the point where our little chatbots achieve human-level intelligence.
There's just one little wrinkle: the tech really isn't getting better.
Though Marcus's more realistic view of AI made him a pariah in the excitable AI community, he's no longer standing alone against scalable AI. Yesterday, University of Edinburgh AI scholar Michael Rovatsos wrote that "it is possible that the release of GPT-5 marks a shift in the evolution of AI which... might usher in the end of creating ever more complicated models whose thought processes are impossible for anyone to understand."
Back in March, a survey of 475 AI researchers concluded that AGI was a "very unlikely" outcome of the current development approach.
And as far back as 2023 — before even GPT-4o, let alone GPT-5 — Microsoft co-founder Bill Gates told the German publication Handelsblatt that scalable AI had "reached a plateau."
Two years later, even AI's ride-or-die backers in the financial sector are starting to come back down to Earth. Despite a better-than-expected second quarter for OpenAI's data center partner CoreWeave, Wall Street is beginning to doubt big tech's ability to deliver on its lofty promise of AGI.
As a result, CoreWeave's stock has plummeted 16 percent at the time of writing, which may be the first sign that AI's bloated carcass is starting to rupture.