The chatbot wars — led by ChatGPT creator OpenAI, Bing/Sydney overlord Microsoft, and the very desperate-to-catch-up Google — are on, with Silicon Valley behemoths and the industry's biggest investors rushing to throw major dollars behind language-generating systems.

But according to a pair of experts writing in a scathing essay in Salon, the frothy hype cycle surrounding chatbot AIs is doomed to be what investors fear most: a bubble. When it pops, they argue, it'll reveal large language model (LLM)-powered systems to be much less paradigm-shifting than advertised, and really just a whole lot of smoke and mirrors.

"The undeniable magic of the human-like conversations generated by GPT," write Gary N. Smith, the Fletcher Jones Professor of Economics at Pomona College, and Jeffrey Lee Funk, an independent technology consultant, "will undoubtedly enrich many who peddle the false narrative that computers are now smarter than us and can be trusted to make decisions for us."

"The AI bubble," they continue, "is inflating rapidly."

The experts' essay is rooted in the argument that a lot of investors seem to fundamentally misunderstand the underlying technology behind the easily anthropomorphized language models. While the bots, particularly ChatGPT and the OpenAI-powered Bing Search, do sound impressively human, they're not actually synthesizing information, and thus often fail to provide thoughtful, analytical, or even correct answers.

Instead, like the predictive text feature on smartphones or in email programs, they just predict what words might come next in a sentence. Every response is the product of a probability calculation, not a demonstration of any real understanding of the material at hand. That underlying machinery leads to the phenomenon of AI hallucination, a very serious failure of the tech made even more complicated by the machines' proclivity for sounding wildly confident, sometimes to the point of becoming combative, even when delivering incorrect answers.

"Trained on unimaginable amounts of text, they string together words in coherent sentences based on statistical probability of words following other words," Smith and Funk explain. "But they are not 'intelligent' in any real way — they are just automated calculators that spit out words."

Many AI optimists, on the other hand, have written off the burgeoning tech's sometimes funny, sometimes genuinely horrifying quips and errors as growing pains. They often argue that more data, which includes the free data granted to the machines by way of public use, will be the solution to the chatbots' fact-checking woes.

If that's the case, the errors would amount to a surface-level bug, rather than a fundamental flaw in the machinery. It's also a tempting narrative for AI investors, who would much rather have bought a car with a flat tire than one that simply can't function as promised. But according to Smith and Funk, more data could actually make the programs' very clear problems worse.

"Training it on larger databases will not solve the inherent problem: LLMs are unreliable because they do not know what words mean. Period," the experts argue.

"In fact,"' they add, "training on future databases that increasingly include the BS spouted by LLMs will make them even less trustworthy."

In other words: if, as many other experts fear, incorrect or otherwise misleading or harmful chatbot-made content — which is currently cheap and easy to produce — starts to clog up the internet, bots like ChatGPT and Bing Search will have an increasingly difficult time sorting out what's real from what isn't. Trustworthy information will be harder to find than ever, and in a way, the internet will become an imitation of itself.

It's a pessimistic take, sure, and maybe the AI optimists are right. Regardless, a lot of powerful people have poured, and are continuing to pour, big bucks into LLMs, not to mention a string of other generative AI technologies. For their own sake, they have a vested interest in getting these AI products into consumers' hands, never mind whether the products are actually all that game-changing in the long run.

"That astonishing and sudden dip," write Smith and Funk, speaking to Google's massive loss in wealth following their embarrassing AI advertising ordeal, "speaks to the degree to which AI has become the latest obsession for investors."

"Yet their confidence in AI — indeed, their very understanding of and definition of it," they add, "is misplaced."

READ MORE: AI chatbots are having their "tulip mania" moment [Salon]

More on chatbots: OpenAI CEO Says AI Will Give Medical Advice to People Too Poor to Afford Doctors
