AI is now the dominant source of image-based misinformation on the internet, a team of Google researchers determined in a recent paper. While the findings have yet to be peer-reviewed, they're as fascinating as they are alarming — and they strike at one of the deepest tensions in Silicon Valley's ongoing AI race.

"The prevalence and harms of online misinformation is a perennial concern for internet platforms, institutions and society at large," reads the paper. "The rise of generative AI-based tools, which provide widely-accessible methods for synthesizing realistic audio, images, video and human-like text, have amplified these concerns."

The study, first caught by former Googler Alexios Mantzarlis and flagged in the newsletter Faked Up, focused on media-based misinformation, or bad information propagated through visual mediums like images and videos. To narrow the scope of the research, the study focused on media tagged with ClaimReview, a markup standard that fact-checkers use to label their work, ultimately examining a total of 135,838 fact-check-tagged pieces of online media.

As the researchers write in the paper, AI is effective for producing realistic synthetic content quickly and easily, at "a scale previously impossible without an enormous amount of manual labor." The availability of AI tools, per the researchers' findings, has led to hockey stick-like growth in AI-generated media online since 2023. Meanwhile, other types of content manipulation decreased in popularity, though "the rise" of AI media "did not produce a bump in the overall proportion" of image-dependent misinformation claims.

Reading between the lines, these results suggest that AI has become misinformation actors' favorite medium.

AI-spun content now makes up roughly 80 percent of visual misinformation, according to the study. What's more, as 404 Media reports, this is likely an undercount. The web is vast, and fact-checking services like those using ClaimReview are imperfect and often require opt-ins. The study also didn't examine media that made partial use of AI — for example, a campaign ad created by the team behind Florida Governor Ron DeSantis' short-lived presidential bid, which included fake AI-generated images of former president Donald Trump smooching Anthony Fauci.

"Fact checker capacity isn't completely elastic, and we can't assume it will necessarily scale with overall misinfo volume," Google's Nick Dufour, the lead author on the paper, told 404, "nor that there aren't novelty/prominence effects in choosing what to fact check."

On their own, these findings are striking. That they come mostly from Google researchers themselves, though, adds an extra layer of salience.

Google is one of the biggest players in Silicon Valley's ongoing AI race and is actively working to build text- and image-generating AI models. (It's even trying to infuse AI into its core product, search, although that effort isn't going well.)

At the same time, AI misinformation is proliferating throughout the internet, eroding Google's search results and, in general, making the open web an even more difficult landscape to navigate.

In short, when it comes to AI, Google is between plenty of rocks and a hard place. And given that the company's overwhelming market share effectively renders it the feudal ruler of the web, this messy impasse affects everyone trying to find quality information online.

It's true that most tools, including conventional media editing and creation tools like Photoshop, can be abused for harm. But as the researchers emphasize, ease and scale both matter. Generative AI tools have replaced a bespoke creation process with Shein-level mass production, and a growing body of research shows that this is presenting a real problem for Google and other managers of the internet.

As always, don't believe everything you read, or see, online. The media world is already fractured, and the line between real and fake is continuing to blur. As Mantzarlis wrote in Faked Up: "A picture is worth 1,000 lies."

More on AI and misinformation: The Reason That Google's AI Suggests Using Glue on Pizza Shows a Deep Flaw with Tech Companies' AI Obsession
