Mind Games

AI Is Causing a Grim New Twist on the Dunning-Kruger Effect, New Research Finds

AI users are lacking in self-awareness.
Frank Landymore
New research shows how AI tools are making a Dunning-Kruger specimen out of everyone who uses them, no matter how smart.
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

People who are the worst at doing something also tend to severely overestimate how good they are at doing it, while those who are actually skilled tend to not realize their true talent.

This galling cognitive bias is called the Dunning-Kruger effect, as you're probably aware — and would you believe it if we told you that AI appears to make it even worse?

Case in point, a new study published in the journal Computers in Human Behavior — and titled, memorably, "AI Makes You Smarter but None the Wiser" — showed that everyone was bad at estimating their own performance after being asked to complete a series of tasks using ChatGPT. And strikingly, it was the participants who were "AI literate" who were the worst offenders.

“When it comes to AI, the [Dunning-Kruger effect] vanishes,” study senior author Robin Welsch, a professor at Aalto University, said in a statement about the work. “In fact, what’s really surprising is that higher AI literacy brings more overconfidence.”

“We would expect people who are AI literate to not only be a bit better at interacting with AI systems, but also at judging their performance with those systems,” Welsch added, “but this was not the case.”

It’s an interesting detail that helps build on our still-burgeoning understanding of all the ways that our AI habits are probably bad for our brains, from being linked with memory loss to atrophying our critical thinking skills. Perhaps it’s also a testament to the ego of the AI power user.

Notably, the findings come amid heated debate around the dangerous “sycophancy” of AI models. Chatbots designed to be both helpful and engaging constantly ply users with flattery and go along with their demands. It’s an addictive combination that makes you feel smart or vindicated. And this sycophancy is thought to be one of the main driving factors behind widespread cases of what psychiatrists are calling “AI psychosis,” in which users suffer breaks with reality and spiral into delusional thinking after becoming obsessed with talking with a chatbot.

In the study, the researchers asked half of 500 participants to use ChatGPT to help solve 20 logical reasoning problems from the Law School Admission Test, and the other half to solve them without AI. Afterwards, each participant was asked to evaluate their own performance, with the promise of extra compensation if they did so accurately. They were also administered a questionnaire designed to gauge their AI literacy.

The researchers found that the group that used ChatGPT substantially improved their scores compared to the group that didn’t. But they also vastly overestimated their performance — and the effect was especially pronounced among the AI savvy, “suggesting that those with more technical knowledge of AI were more confident but less precise in judging their own performance,” the authors wrote.

When they examined how the participants used the chatbot, the team also discovered that the majority of them rarely asked ChatGPT more than one question per problem — with no further probing or double-checking. According to Welsch, this is an example of what psychologists call cognitive offloading, a well-documented phenomenon in which users outsource their thinking to an AI tool.

“We looked at whether they truly reflected with the AI system and found that people just thought the AI would solve things for them,” Welsch said. “Usually there was just one single interaction to get the results, which means that users blindly trusted the system.”

You’ve got to hand it to AI: it’s democratizing the Dunning-Kruger effect. What other tech can claim to do that?

More on AI: Character.AI, Accused of Driving Teens to Suicide, Says It Will Ban Minors From Using Its Chatbots