A sticky situation.
Lost in the Sauce
Google Search now comes with fancy-schmancy "AI Overviews" — and predictably, they're already giving some pretty dumb answers. This latest one, though, really takes the cake — or shall we say pie.
As shared in a screenshot on X-formerly-Twitter, someone looked up "cheese not sticking to pizza" on Google. The search engine's AI Overview authoritatively explained that "cheese can slide off pizza for a number of reasons."
"Here are some things you can try," it lectures. Its first suggestion is to mix in cheese into the sauce. Then, without so much as a warning, it recommends adding "about 1/8 cup of non-toxic glue to the sauce to give it more tackiness."
That, we can all agree, is absolutely horrible, if not dangerous, advice — whichever way you slice it.
https://t.co/W09ssjvOkJ pic.twitter.com/6ALCbz6EjK
— SG-r01 (@heavenrend) May 22, 2024
Works Cited
What's really illuminating, however, is the apparent source the AI drew on for its recommendation. According to internet sleuths, it's almost certainly a random Reddit comment from 11 years ago, posted by a user with the crass handle fucksmith and very likely meant as a joke. Read it for yourself, and the similarities with the AI Overview's recommendation are undeniable.
Mr. Fucksmith was responding to a thread titled "My cheese slides off the pizza too easily." It's easy to see why that would make the comment appear relevant to the query, but it's harder to understand why the AI algorithm singled out this particular comment, which had only eight upvotes at the time, from a thread that's just as obscure.
Seems the origin of the Google AI’s conclusion was an 11 year old Reddit post by the eminent scholar, fucksmith. https://t.co/fG8i5ZlWtl pic.twitter.com/0ijXRqA16y
— Kurt Opsahl @kurt@mstdn.social (@kurtopsahl) May 23, 2024
Breach of Crust
That the AI can't discern sincerity from shitpost is no surprise. It underscores a fundamental flaw in the predominant approach to training generative AI models, which, in a nutshell, is to feed them as much information scraped from the internet as possible. That inevitably leads to a lot of garbage being ingested, and eventually regurgitated. In this case, the faults of that approach have a clear lineage: in February, Google struck a $60 million deal with Reddit to train its AI on users' posts. Today, that AI is using Reddit posts to tell people to eat glue.
There's another flaw, too: the cheesy contretemps betrays just how lackluster these AI models' capabilities really are. What do their answers offer that couldn't be had by simply adding "Reddit" to a search query? At least in the latter case, you'd see where the information comes from in its original context, and you could decide for yourself whether fucksmith is really the pizza expert you want to be taking advice from.
But if Google's AI Overview is just going to crib what people post on social media, it could at least do it correctly: the real answer was sitting right there as the top comment in the very Reddit thread it was copying from.
"Too much sauce, bro," wrote a user with the handle rotzak, sagaciously.
After this story ran, a Google spokesperson provided the following statement: "The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web. Many of the examples we've seen have been uncommon queries, and we've also seen examples that were doctored or that we couldn't reproduce. We conducted extensive testing before launching this new experience, and as with other features we've launched in Search, we appreciate the feedback. We're taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out."
More on AI: Google Is Already Jamming Advertisements Into Its Crappy AI