Many media executives are betting the future of the industry on artificial intelligence, going so far as to replace journalists in an effort to keep costs down and cash in on the hype.
The result of these efforts so far has left a lot to be desired. We've come across countless examples of publications inadvertently publishing garbled AI slop, infuriating readers and journalists alike.
AI's persistent hallucinations are already infecting large swathes of our online lives, from Google's hilariously terrible AI Overviews mangling trustworthy information to brainrot gambling content appearing in newspapers to entire AI slop farms that blatantly rip off real journalists' work.
Worse yet, Google's embrace of the tech is actively hurting the bottom lines of publications by keeping readers — and with them, much-needed membership and display ad revenue — away from the content their AI is monetizing.
Meanwhile, journalists themselves are finding out the hard way that AI is woefully inadequate at meaningfully helping them out in their day-to-day work.
As a team led by award-winning New York University journalism professor Hilke Schellmann found in a new investigation published by the Columbia Journalism Review, AI is strikingly terrible at summarizing documents and scientific research for busy reporters who might be tempted to rely on the tech.
Schellmann and her colleagues created a new test to evaluate the "journalistic values of accuracy and truth." They found that most currently available AI models, including Google's Gemini 2.5 Pro and OpenAI's GPT-4o (which remains available to paying customers after OpenAI scrapped plans to retire it following the release of GPT-5), successfully generated short summaries of transcripts and minutes from local government meetings with "almost no hallucinations."
However, the AIs systematically "underperformed against the human benchmark in generating accurate long summaries" of around 500 words, omitting roughly half the facts contained in the transcripts and minutes. Hallucinations were also a bigger problem in the long summaries than in the short ones.
The tech's shortcomings were far more egregious when it came to conducting research on behalf of science reporters. The team tasked five top AI research tools with generating a list of related scientific papers for four academic papers, with results that ranged from "underwhelming" to "alarming."
"None of the tools produced literature reviews with significant overlap to the benchmark papers, except for one test with Semantic Scholar, where it matched about 50 percent of citations," Schellman wrote. "Across all four tests, most tools identified less than 6 percent of the same papers cited in the human-authored reviews, and often 0 percent."
Repeated tests also showed that the tools' picture of the scientific consensus shifted simply from running the same prompts again.
"A poorly sourced list of related papers isn’t just incomplete, it’s misleading," Schellman argued. "If a journalist relies on these tools to understand the context surrounding new research, they risk misunderstanding and misrepresenting scientific breakthroughs, omitting published critiques, and overlooking prior work that challenges the findings."
In short, the investigation shows that despite AI companies promising that their tech can reduce the workload of overworked journalists, their tools fail at core tasks like summarization and literature research.
That means it's also up to journalists to perform a "final fact-check," Schellmann argued.
But that's the paradox at the heart of so much contemporary AI: how useful is a tool if you have to double-check everything it does? In practice, these tools may end up adding to journalists' workloads rather than reducing them.
As the internet continues to be polluted by generative AI slop that more often than not eludes virtually any form of human fact-checking or revision, the future of journalism is at stake.
And that's not just sensationalism or AI doomerism talking. The journalism industry is facing an existential threat as newsrooms are roiled by sweeping layoffs. Media companies, meanwhile, are doubling down on the promises of AI, capturing shareholder enthusiasm by striking up million-dollar licensing deals with the likes of OpenAI.
It's been just over a year and a half since Futurism ran a story about Sports Illustrated publishing AI slop, bylined by AI-generated authors masquerading as humans.
Considering where the industry stands today, little seems to have changed since then, despite widespread disillusionment and a tangible anti-AI turn in public sentiment.
Last year, Axel Springer, the German parent company of Politico and the largest publisher in Europe, forced journalists at the publication to publish AI slop, triggering outrage. And The Washington Post is working on an AI tool that would let underqualified writers publish content in its storied pages.
Even Springer Nature, the stalwart publisher of scientific journals, is now offering to sell published authors AI-generated "Media Kits" that summarize their own research.
Consumers of media are also suspicious of the tech. Last year, a study found that when an AI contribution was disclosed in the byline, readers' perceptions of source and author credibility fell significantly.
More on AI journalism: AI Is Slitting the Throat of the Journalism Industry