Amid Google's preparations to launch its own chatbot-integrated search feature — a major push to compete with Microsoft's ChatGPT-integrated Bing — the search giant has quietly issued some new warnings to publishers looking to run AI-generated content.
Specifically, Google is warning outlets that there'll be extra scrutiny from its search team on AI-generated content regarding "health, civic, or financial information." So, basically, areas where you really want to get things right.
"These issues exist in both human-generated and AI-generated content," reads the new Google FAQ, speaking specifically to "AI content that potentially propagates misinformation or contradicts consensus on important topics."
"However content is produced, our systems look to surface high-quality information from reliable sources, and not information that contradicts well-established consensus on important topics," it continues. "On topics where information quality is critically important — like health, civic, or financial information — our systems place an even greater emphasis on signals of reliability."
To Google's credit, it's a fair warning.
Its own involvement in the AI arms race aside, Google is the most-used search engine by a landslide. Generative AI is already being used by major publishers to churn out content, and the Hustle Bro sect is encouraging its following to use free-to-the-public tools like ChatGPT to build personal content mills. As one of the foremost curators of our digital lives, Google has to adapt to new technologies that change how online content is made — and generative AI, for all its very real flaws, is already doing just that.
That said, Google absolutely has a dog in this fight, because it is a dog in the fight. It seems pretty desperate to keep its head above water in an AI market led by Microsoft and OpenAI, and given that its own chatbot-infused search has already been shown to be blatantly incorrect — in an advertisement, of all things — it's probably best for the company to get ahead of the many problems likely to come in a digital landscape packed with cheap, fast, extremely confident-sounding but often wrong AI content.
To that end, it's not surprising to see Google name healthcare and finance as content of particular concern, due in part to their general importance as well as to the very bleak reality that available generative AI tools consistently get those kinds of content wrong. Large Language Models (LLMs) are notoriously bad with numbers — illustrated well by CNET's embarrassingly error-ridden AI-generated financial advice — while doctors have found ChatGPT to straight-up fabricate medical diagnoses and treatments, complete with fake sources for its alleged findings.
And as far as political content goes, a lot of experts have warned that ChatGPT's availability is perfectly poised to turn our online world into propaganda hell. So, you know, cheers.
Ultimately, though, Google says that we shouldn't worry too much. After all, it's been doing this for a while.
"Our focus on the quality of content, rather than how content is produced, is a useful guide that has helped us deliver reliable, high quality results to users for years," reads the new FAQ.
Noted, but we're sure Google will forgive us for having our fair share of concerns — especially considering that it's not making content creators mark material as AI-generated.
"AI or automation disclosures are useful for content where someone might think 'How was this created?'" they wrote. "Consider adding these when it would be reasonably expected."
More on Google AI: Google's Demo of Upcoming AI Shows It Making Huge Factual Mistake