Researchers have found that people's trust in artificial intelligence falls as they become more AI literate, a damning revelation that highlights persistent skepticism of the tech.
AI companies continue to paint the tech as a mesmerizing, revolutionary inflection point for humanity that justifies enormous capital expenditures to run wildly resource-intensive AI models.
But when real-life users become more familiar with the tech — realizing that, at their core, products like ChatGPT are word prediction algorithms rather than human-like sentient entities — it can be a major turnoff, as the Wall Street Journal reports.
As detailed in a study published in the Journal of Marketing earlier this year, an international team of researchers found that AI's biggest fans tend to be the people with the shallowest familiarity with it.
"Contrary to expectations revealed in four surveys, cross-country data and six additional studies find that people with lower AI literacy are typically more receptive to AI," they wrote, proposing that "people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe in the face of AI's execution of tasks that seem to require uniquely human attributes."
It's an especially pertinent topic due to the widespread use of the tech among students, who may lack the literacy to make informed decisions on when or how to use AI — and employ it as a crutch to avoid learning deeper reasoning, writing, and research skills of their own. Of course, those students are likely to become even more reliant on companies like OpenAI as they age and enter the workforce.
"When you don’t really get what’s going on under the hood, AI creating these things seems amazing, and that’s when it can feel magical," University of Southern California associate professor of marketing Stephanie Tully told the WSJ. "And that feeling can actually increase people’s willingness to use it."
The findings should serve as a wake-up call for the industry: people who are more clued in to how the tech works are less likely to use it, flying in the face of the assumption that greater technical knowledge leads to wider adoption.
"In other domains, like wine, the people who know the most about it are wine lovers," Tully told the WSJ. "With AI, it’s the opposite."
In an experiment, the researchers gave 234 undergraduate students a questionnaire, asking them whether they would use AI to help with writing four different papers.
Those who scored lower on AI literacy were more willing to use the tech to complete the assignments — despite also being more concerned about AI ethics and the tech's potential to negatively impact humanity.
"Understanding that AI is just pattern-matching can strip away the emotional experience," coauthor and George Washington University assistant professor of marketing Gil Appel told the WSJ.
The team corroborated their own findings by pointing to several other studies that also showed lower AI literacy was associated with greater willingness to use the tech.
As a result, the researchers argue that users should be educated about how AI works so they can make better-informed decisions.
"With the increase in AI around us, consumers should have a basic level of literacy to be able to understand when AI might have important limitations," Tully told the WSJ.
More on AI literacy: Hypocrite Teachers Are Telling Students Not to Use AI While Using It to Grade Their Work