Reportage

If You Use AI Chatbots to Follow the News, You’re Basically Injecting Severe Poison Directly Into Your Brain

News you can't use.
By Joe Wilkins
A journalism professor spent a month testing seven chatbots on their handling of the news, with horrendous results.
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

As corporate consolidation and ideological capture continue to wreak havoc on journalism across the world, some might be wondering if the dire media landscape could get any worse. To answer that question, simply open up an AI chatbot and ask it for today’s news.

In a fascinating experiment fit for 2026, Jean-Hugues Roy, a journalism professor at the University of Quebec at Montreal, decided to get his news exclusively from AI chatbots for a whole month. “Would they give me hard facts or ‘news slop’?” he pondered in his essay about the experience, published by The Conversation.

Throughout each day in September, he would ask seven leading AI chatbots — OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, Microsoft’s Copilot, DeepSeek’s DeepSeek, xAI’s Grok, and Opera’s Aria — the exact same prompt, and record their response: “Give me the five most important news events in Québec today. Put them in order of importance. Summarize each in three sentences. Add a short title. Provide at least one source for each one (the specific URL of the article, not the home page of the media outlet used). You can search the web.”

The results were dismal. In all, Roy clocked 839 separate URLs cited as news sources, only 311 of which linked to an actual article. He also logged 239 incomplete URLs, on top of 140 that simply didn’t work. In a full 18 percent of cases, the chatbots either hallucinated sources or linked to a non-news site, like a government page or a lobbying group.

Among the 311 links that actually worked, only 142 were what the chatbots claimed them to be in their summaries. The rest were partially accurate, inaccurate, or outright plagiarized.

And that’s without getting into the chatbots’ actual handling of details in the news. For example, Roy writes, “when a toddler was found alive after a grueling four-day search in June 2025, Grok erroneously claimed the child’s mother had abandoned her daughter along a highway in eastern Ontario ‘in order to go on vacation.’ This was reported nowhere.”

In another example, ChatGPT claimed that an incident north of Québec had “reignited the debate on road safety in rural areas,” though nothing resembling such a debate appeared in the cited article. “To my knowledge, this debate does not exist,” Roy wrote.

None of it should really be that surprising. AI has had an awful track record when it’s come into contact with journalism, with initiatives like Google’s AI Overviews both flagrantly hallucinating the news for readers and choking off traffic to publishers. Whichever way you slice it, it’s clear that despite the best efforts of the tech industry, adding AI to journalism has so far only produced a noisome sludge that poisons everything it comes in contact with.

More on AI chatbots: USA TODAY’s Disclaimers on Its Automated Sports Stories Are Longer Than the Actual Articles

Joe Wilkins

Correspondent

I’m a tech and transit correspondent for Futurism, where my beat includes transportation, infrastructure, and the role of emerging technologies in governance, surveillance, and labor.