Google's AI search, which swallows up web results and regurgitates them back to users in a repackaged form, answers each query in a concise, coolly confident tone. Just one tiny problem: it's wrong. A lot.

Over the past week, X-formerly-Twitter has been ablaze with screenshots of Google's "AI Overview" spewing a seemingly unending torrent of inaccurate, though confidently stated, answers to user queries. They range from hilarious to bizarre to downright harmful, rewriting history and offering extremely bad pizza advice along the way. Reading through the AI-spun results, it's enough to make you wonder: what, exactly, is Google spending billions of dollars on here?

One absolute banger of a Google AI-promoted health tip, served up in response to the query "health benefits of eating ass," is the claim that "eating ass can boost your immune system" — a claim that Google's AI says is supported by a 2018 study from the University of California, Santa Barbara.

A healthy sex life is known to contribute to an individual's overall immune health, so for those who frequently partake in anilingus, it could possibly be argued that the sex act contributes in part to their overall well-being. Google's source, however, isn't a science-backed analysis of sex and its relationship to immune health; instead, it's a clearly satirical article published by writers at UC Santa Barbara's student paper The Daily Nexus, in a comedic section of the paper titled "The Daily Stench."

There's no research to back up the Google AI's added claim that "people who eat ass are 33 percent less likely to catch airborne illnesses," although the line that really gives away the robot's lack of judgment is the note that "people with higher levels of truffle butter" — sexual slang that you may look up yourself, if you absolutely must — "in their saliva have stronger immunity" to common airborne ailments.

In sum? This is all gross comedic nonsense, but Google's AI didn't get the joke.

Other, more humorous Overview mistakes include various ill-advised cooking claims, like the suggestion that gasoline can be used to make a "spicy spaghetti dish," or the widely shared proposition that pizza chefs might consider mixing glue — yes, glue — into their sauce to ensure that the pizza cheese doesn't slip off.

As netizens quickly pointed out, the latter claim appears to be sourced from an 11-year-old Reddit comment posted by a user named "Fucksmith." So far, Google's AI is getting a negative score for media literacy.

Elsewhere, the AI was seen spouting nonsensical math equations designed to help users figure out exactly how many sisters they already have (the AI's final answer: 680 sisters, because this person must be the child of Genghis Khan). It was also telling users to eat "at least one small rock a day," and still doesn't think that any countries in Africa start with the letter "K."

Many of the responses weren't quite so funny. Take this AI Overview, flagged by Nieman Lab's Sarah Scire. When asked how many Muslim presidents have led the US, Google's AI told Scire, falsely, that "Barack Obama is the first Muslim President." The AI also went on to say that current President Joe Biden has "no formal religious affiliation." Neither of these statements is true. Obama is a self-described Christian and attended Protestant churches; Biden is Catholic.

The AI's bungling of Obama's religion goes beyond incorrect, though. The statement reflects the beliefs of the "Birtherism" movement, the baseless and racism-driven conspiracy theory that Obama wasn't born in the US (he was) and that he's secretly a Muslim (he isn't). Google's AI, then, is not only spitting out wrong information, but feeding damaging fringe conspiracy beliefs at the same time.

A Google spokesperson told The Verge that the AI errors being shared were made in "generally very uncommon queries, and aren't representative of most people's experiences." We'd argue, though, that a search being "uncommon" isn't an excuse for an AI tool to give bad advice, especially in cases where a bad answer could cause real damage.

To that end, it could also be argued that AI Overview exposes Google to liability for possible harms in ways it may not have faced in the past. By introducing its text-regurgitating tool, Google is no longer just a content aggregator or host, but also a content creator. Spouting conspiracy theories, or offering glue-laced recipes, has a different set of consequences when you're presenting it as paraphrased fact, and not just as a blue link to a bad source on a results page.

It's incredible, really. Google controls over 80 percent of total search market share, making it the go-to gateway to the internet for the vast majority of global internet users. Search is also an incredibly lucrative product, printing the billions that Google is now pouring into its AI endeavors. And now, because of those costly AI efforts, Google's golden goose is imploding.

Google now finds itself between a rock and a hard place. Halting its AI search experiment would be a terrible look for management; at the same time, it's unclear whether Google can even fix the problems that clearly plague what is, objectively, a bad product. In the meantime, the internet continues to crumble.

Dumpster fires everywhere. Anyway! Anyone in the mood for pizza?

More on Google AI: Google Is Already Jamming Advertisements into Its Crappy AI
