Gizmodo owner G/O Media's first foray into AI-generated content was messy, avoidable, and insulting to readers and employees alike.
And yet, the AI-generated article in question ultimately did what it was seemingly designed to do: rank in search results.
Given the mess it took to get here, that outcome should concern everyone in the media industry. It's an early glimpse of a dystopian future in which AI models generate content for the sake of other bots, at the expense of any humans caught up in the fallout.
To back up a bit: at the end of June, G/O Media — which owns Gizmodo, Kotaku, The Onion, and Quartz, among others — announced that it would begin to publish AI-generated content across its many publications as part of a "modest test."
"It shouldn't be a surprise that we've done a significant amount of thinking about Artificial Intelligence, just as everyone in the media business has been doing of late," Merrill Brown, the media group's editorial director, wrote in an email to employees. "We're convinced here that the changes AI will bring to the media and journalism worlds will be very meaningful, if difficult to predict with certainty, in 2023."
Unsurprisingly, employees across G/O Media-owned publications were furious in response to the news. And as it turned out, with good reason. In the very first AI-generated article Gizmodo published last week, the website's "Gizmodo Bot" completely missed the mark. The post, a so-called "Chronological List of Star Wars Movies & TV Shows," was terribly written and riddled with factual mistakes. Before it was corrected, the list failed to put the movies and shows into their correct chronological order and excluded some more recent chapters and spinoffs of the beloved franchise.
It was, to put it mildly, a terrible look for the company — and in the aftermath, the humans at Gizmodo pushed back even harder, with deputy editor James Whitbrook calling the listicle "embarrassing, unpublishable, disrespectful."
Which brings us to today. Despite all the mistakes and lack of staff consent, it appears that Gizmodo's bot-generated article has still managed to rank and appear on the first page of Google results when you search for the words "Star Wars movies."
And while some of the listicle's most egregious errors have been corrected, it's still exposing readers to incorrect information that a human "Star Wars" aficionado most likely would have caught.
For starters, it contains a glaring typo, referring to "The Clone Wars" as "An nimated [sic] film."
A more serious issue is that every installment the AI listed is canon — meaning that they comprise the authoritative lore of the "Star Wars" franchise — except one. The final entry on the list, "Star Wars: Visions," is not canonical to the "Star Wars" universe at all; instead, by its explicit framing, it's a reinterpretation of the franchise's mythology that doesn't conform to established canon. In other words, if the AI was going to include the non-canon "Visions," it might as well include the non-canon 1980s "Star Wars: Droids" animated series, or the notorious 1978 "Star Wars Holiday Special." The AI provided no explanation for the discrepancy.
Worse yet, since "Visions" is an anthology, it includes episodes peppered throughout the "Star Wars" timeline, meaning that the AI's claim that it's the final chronological "Star Wars" installment is woefully wrong.
An equally galling error: the AI lists the 2008 "The Clone Wars" movie as taking place after "The Clone Wars" TV series. That's flatly incorrect; according to Disney's official chronology, the movie takes place several episodes into the series.
Are these nitty-gritty complaints? Sure, but the purpose of the listicle is to provide accurate information about the "Star Wars" franchise, and Gizmodo's talented staff has built a decades-long reputation for caring about getting facts right about nerdy topics. A human expert, with an investment in the fandom, would have been able to tackle the topic in a far more nuanced way, providing readers with precise and well-contextualized info.
G/O Media didn't respond to questions about the lingering inaccuracies in the bot-generated article.
Besides being badly written, the article was clearly never really intended for human readers. Instead, the obvious ploy is to fool search algorithms into ranking it highly — and given its Google Search ranking, it's clearly fulfilled that objective. In many ways, it's a proof-of-concept for a depressing system in which bots write primarily for bots, and the role of humans, whether they're writers, editors, or readers, is increasingly diminished in the process.
A Google spokesperson provided a statement that didn't directly address the error-ridden article's prominence in the company's search results.
"Our goal for Search is to show helpful, relevant content that’s created for people, rather than for ranking highly on search engines," the statement read. "And for 20 years, Search has adapted to address new spam techniques and low quality content, including mass-produced content in various forms. Content created with the aim of gaming search ranking — regardless of if it's produced by humans or AI — is still spam and will be treated as such. Thanks to our advanced spam fighting systems, we’re able to keep Search 99 percent spam free. And we continue to launch improvements to reduce low quality, unoriginal content designed to attract clicks."
Stakeholders, naturally, see things differently.
"AI content will not replace my work — but it will devalue it, place undue burden on editors, destroy the credibility of my outlet, and further frustrate our audience," Lin Codega, a Gizmodo journalist, tweeted when the AI initiative was first announced. "AI in any form, only undermines our mission, demoralizes our reporters, and degrades our audience's trust."
This low-effort, AI-generated fluff still doesn't bode well for either the internet or the media industry's future. We've yet to see an example of a generative AI initiative in the journalism industry that hasn't been disastrous in one way or another. In the clamor to cash in on the trend, readers are being spoonfed incorrect, plagiarized, or otherwise uninspired content, while writers and editors are forced to chase error after error plaguing bot-produced stories.
But that hasn't stopped media executives from trying it anyway, seemingly entranced by the gleam of cheap, scalable, and SEO-friendly content that doesn't rely on pesky humans. And Google is very much complicit, rewarding these efforts by allowing poorly researched, AI-generated content to rank highly.
Which leaves us with a critical question: what does a future of the internet look like when bots are primarily designed to serve other bots with their content? It's a vision that doesn't inspire much confidence — and this certainly feels like an "at what cost" moment.
Updated to correctly identify Merrill Brown's job title.
More on the listicle from SEO hell: Io9 Staff Horrified as Site Publishes Error-Filled AI Generated Garbage