"This isn’t fixable."
Two Lies and a Truth
It's no secret that AI chatbots like OpenAI's ChatGPT have a strong tendency to make stuff up. They're just as good at inventing facts as they are at assisting you with work — and when they mix up the two, disaster can strike.
Whether the people creating AI can fix that issue remains up for debate, the Associated Press reports. Some experts, including executives who are marketing these tools, argue that these chatbots are doomed to forever cook up falsehoods, despite their makers' best efforts.
"I don’t think that there’s any model today that doesn’t suffer from some hallucination," Daniela Amodei, co-founder and president of Anthropic, maker of the AI chatbot Claude 2, told the AP.
"They’re really just sort of designed to predict the next word," she added. "And so there will be some rate at which the model does that inaccurately."
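Amodei's point is that a model which samples its next word from a probability distribution will, by construction, sometimes pick a wrong continuation. A toy sketch makes this concrete; note that the vocabulary, the probabilities, and the `predict_next` helper below are all invented for illustration and bear no relation to how any production model is actually built.

```python
import random

# Toy bigram "language model": probability of the next word given the
# current word. All words and numbers here are made up for illustration.
BIGRAMS = {
    "the": {"sky": 0.6, "moon": 0.4},
    "sky": {"is": 1.0},
    "is": {"blue": 0.9, "green": 0.1},  # small chance of a false continuation
}

def predict_next(word, rng):
    """Sample the next word from the model's probability distribution."""
    choices = BIGRAMS[word]
    words = list(choices)
    weights = [choices[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Even a model that is "mostly right" emits wrong answers at roughly
# the rate its distribution assigns to them.
rng = random.Random(0)
samples = [predict_next("is", rng) for _ in range(10_000)]
error_rate = samples.count("green") / len(samples)
print(f"fraction of false continuations: {error_rate:.2f}")
```

The takeaway mirrors the quote: as long as incorrect continuations carry nonzero probability, some rate of confident-sounding mistakes is baked in, no matter how much the distribution is sharpened.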
And that doesn't exactly bode well, considering how deeply companies are invested in the technology. Google, for instance, has been secretly pitching an AI-powered news generator to major newspapers. Other news outlets are already experimenting with the tech, producing AI-generated content that's often rife with inaccuracies.
In other words, unless chatbots can overcome their strong tendency to make stuff up, companies could be looking at major setbacks as they explore new ways to make use of the tech.
"This isn’t fixable," Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, told the AP. "It’s inherent in the mismatch between the technology and the proposed use cases."
According to Bender, it's only "by chance" that generated text "happens to be interpretable as something we deem correct."
Tech leaders, however, are optimistic — which isn't exactly surprising, considering billions of dollars are on the line.
"I think we will get the hallucination problem to a much, much better place," OpenAI CEO Sam Altman told an audience in India earlier this year. "I think it will take us a year and a half, two years. Something like that."
Companies like OpenAI and Anthropic are now caught up in an uphill battle. If one thing is certain, it's that getting chatbots to reliably tell the truth will be anything but easy — if it's possible at all, that is.
More on AI chatbots: OpenAI Competitor Says Its Chatbot Has a Conscience