Trust Issues

Gannett Promised to Be Super Responsible With AI Before Completely Bungling It

Gannett promised that no AI content would be published in its papers without human oversight.
Image: Journalists protest outside the offices of the Austin American-Statesman on June 5, 2023 in Austin, Texas, joining a nationwide walkout against Gannett's job cuts, budget cuts, and leadership under CEO Mike Reed. (Brandon Bell via Getty Images)

Just a few short months ago, in June, executives at the publishing giant Gannett — which owns USA Today, in addition to hundreds of local newspapers — swore that they would use AI safely and responsibly.

“The desire to go fast was a mistake for some of the other news services,” Renn Turiano, senior vice president and head of product at Gannett, told Reuters at the time. “We’re not making that mistake.”

Turiano, per Reuters, further argued that automation would mostly just streamline workflows for its human journalists, freeing up their time and alleviating busywork (a common refrain among the AI-positive, in the media industry and beyond).

Perhaps most notably, Turiano reportedly promised that humans would always be included in the publisher’s AI processes, and that no AI-generated content would be published “automatically, without oversight,” according to the June report.

It’s a reasonable-enough attitude toward the use of AI, and if Gannett had actually followed through with it, it may well have set a strong example for the rest of the industry. Gannett’s optimistic promises, however, couldn’t be more broken.

To recap: last week, it was discovered that the news behemoth had been quietly publishing AI-generated high school sports articles in several of its local papers as well as in USA Today. And this content, generated by a company called Lede AI, was terrible. Each AI-spun blurb was awkward and repetitive, with no mention of details like player names — and occasionally even displayed outright formatting gore.

Most importantly, as Gannett’s rush to retroactively edit the synthetic snippets makes all the clearer, there appears to have been little to no human involvement in the drafting or publishing of the AI-generated material.

In other words, the reality of Gannett’s AI efforts couldn’t be further from the responsible, human-intensive AI vision that the publisher laid out to Reuters in June — and even cemented in its AI ethics policy, which has similarly been turned upside-down by Gannett’s ill-informed choice to auto-publish synthetic content to its public websites.

To add an extra sprinkle of irony, an expert contributing to the June report even expressed his concerns about exactly what ended up happening at Gannett and USA Today.

“Where I am right now,” that expert, Northwestern University associate professor of Communications and Computer Science Nicholas Diakopoulos, told Reuters at the time, “is I wouldn’t recommend these models for any journalistic use case where you’re publishing automatically to a public channel.”

And since the publisher doesn’t appear to be planning to pull the AI plug, better luck to them next time — though maybe they should abstain from making any promises they can’t keep.

More on Gannett AI: Gannett Sports Writer on Botched AI-Generated Sports Articles: “Embarrassing”


Maggie Harrison Dupré

Senior Staff Writer

I’m a senior staff writer at Futurism, investigating how the rise of artificial intelligence is impacting the media, internet, and information ecosystems.