"What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be."

AI'm With Stupid

Robotics researcher and AI expert Rodney Brooks argues that we've been vastly overestimating OpenAI's large language models, on which its blockbuster chatbot ChatGPT is based.

In a terrific interview with IEEE Spectrum, Brooks argues that these tools are a lot stupider than we realize, not to mention a very far cry from being able to compete with humans at any given task on an intellectual level. Overall, he says, we're guilty of many sins when it comes to predicting the future of AI.

Long story short, is AI poised to become the sort of artificial general intelligence (AGI) that could operate at a similar intellectual level to humans?

"No, because it doesn’t have any underlying model of the world," Brooks told the publication. "It doesn’t have any connection to the world. It is correlation between language."

Reality Check

Brooks' comments serve as a valuable reminder of the current limitations plaguing AI tech, and of how easy it is to read meaning into its output, even though these models were engineered simply to sound like humans rather than to reason like them.

"We see a person do something, and we know what else they can do, and we can make a judgment quickly," he told IEEE Spectrum. "But our models for generalizing from a performance to a competence don’t apply to AI systems."

In other words, current language models aren't able to logically infer meaning, despite sounding as if they can, which can easily mislead the user.

"What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be," Brooks said.

Completely Wrong

The researcher said that he's been experimenting with large language models to help him with "arcane coding" — but ran into some serious trouble.

"It gives an answer with complete confidence, and I sort of believe it," Brooks told IEEE Spectrum. "And half the time, it’s completely wrong. And I spend two or three hours using that hint, and then I say, 'That didn’t work,' and it just does this other thing."

"Now, that’s not the same as intelligence," he added. "It’s not the same as interacting. It’s looking it up."

In short, Brooks believes future iterations of the tech could end up in some interesting places — "but not AGI."

And given the risks involved in having an AI system surpass the intelligence of a human being, it's probably better that way.

More on ChatGPT: ChatGPT Happy to Write Smut About Freakishly Obscure Sex Act

