In a paper published earlier this month, OpenAI researchers said they'd found the reason why even the most powerful AI models still suffer from rampant "hallucinations," in which products like ChatGPT confidently make assertions that are factually false.
They found that the way we evaluate the output of large language models, like the ones driving ChatGPT, means they're "optimized to be good test-takers" and that "guessing when uncertain improves test performance."
In simple terms, the creators of AI incentivize them to guess rather than admit they don't know the answer — which might be a good strategy on an exam, but is outright dangerous when giving high-stakes advice about topics like medicine or law.
While OpenAI claimed in an accompanying blog post that "there is a straightforward fix" — tweaking evaluations to "penalize confident errors more than you penalize uncertainty and give partial credit for appropriate expressions of uncertainty" — one expert is warning that the strategy could run headlong into harsh business realities.
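To make that scoring tweak concrete, here is a minimal Python sketch of an uncertainty-aware rubric along the lines the blog post describes. The specific weights — full credit for a correct answer, partial credit for abstaining, a heavier penalty for a confident error — are illustrative assumptions, not OpenAI's actual numbers.

```python
# Hypothetical illustration of the evaluation tweak OpenAI describes:
# reward correct answers, give partial credit for admitting uncertainty,
# and penalize confident wrong answers hardest. The weights are made up.

from typing import Optional


def score_response(prediction: Optional[str], ground_truth: str) -> float:
    """Score one model answer under an uncertainty-aware rubric.

    `prediction` is None when the model abstains ("I don't know").
    """
    if prediction is None:          # abstention: partial credit, no penalty
        return 0.3
    if prediction == ground_truth:  # correct answer: full credit
        return 1.0
    return -1.0                     # confident error: penalized harder than abstaining


# Under a plain accuracy metric, guessing always weakly beats abstaining;
# under this rubric, guessing only pays off when the model is likely right.
examples = [("Paris", "Paris"), (None, "Canberra"), ("Sydney", "Canberra")]
print(sum(score_response(p, t) for p, t in examples))  # 1.0 + 0.3 - 1.0 = 0.3
```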
In an essay for The Conversation, University of Sheffield lecturer and AI optimization expert Wei Xing argued that the AI industry wouldn't be economically incentivized to make these changes, as doing so could dramatically increase costs.
Worse yet, having an AI repeatedly admit it can't answer a prompt with sufficient confidence could deter users, who prefer a confident answer even when it's ultimately incorrect.
If ChatGPT admitted that it didn't know the answer to even 30 percent of queries, users could quickly become frustrated and move on, Xing argued.
"Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly," the researcher wrote.
While there are "established methods for quantifying uncertainty," AI models could end up requiring "significantly more computation than today’s approach," he argued, "as they must evaluate multiple possible responses and estimate confidence levels."
"For a system processing millions of queries daily, this translates to dramatically higher operational costs," Xing wrote.
Piling up the expenses at this juncture could prove disastrous. AI companies have bet big on scale, doubling down on expanding infrastructure to run increasingly power-hungry models. But try as they might, a return on investment appears to be many years, if not decades, out. So far, tens of billions of dollars' worth of capital expenditures have eclipsed relatively modest revenues.
In other words, increasing already sky-high operational costs — while alienating users — could be yet another major thorn in the side of firms like OpenAI as they race to reassure investors that there's a feasible business model in the long term.
Xing argued that the company's proposed fixes for hallucinations may work for "AI systems managing critical business operations or economic infrastructure," where "the cost of hallucinations far exceeds the expense of getting models to decide whether they’re too uncertain."
"However, consumer applications still dominate AI development priorities," he added. "Users want systems that provide confident answers to any question."
Simply guessing and serving up a confident answer is inherently faster and cheaper for companies than carefully estimating uncertainty, which disincentivizes the more cautious approach that would produce fewer hallucinations.
How all of this will play out in the long term is anybody's guess, especially as market forces continue to shift and companies find more efficient ways to run their AI models.
But one thing is unlikely to change: guessing will always remain the far more economical option.
"In short, the OpenAI paper inadvertently highlights an uncomfortable truth," Xing concluded. "The business incentives driving consumer AI development remain fundamentally misaligned with reducing hallucinations."
"Until these incentives change, hallucinations will persist," he added.
More on hallucinations: OpenAI Realizes It Made a Terrible Mistake