OpenAI's ChatGPT is flooding the internet with fabricated facts and disinformation, and that's rapidly becoming a very real problem for the journalism industry.

Reporters at The Guardian noticed that the AI chatbot had invented entire Guardian articles and bylines that the paper never actually published, a worrying side effect of democratizing a technology that can't reliably distinguish truth from fiction.

Worse yet, letting these chatbots "hallucinate" sources (the term is itself now a disputed euphemism) could serve to undermine legitimate news organizations.

"Huge amounts have been written about generative AI’s tendency to manufacture facts and events," The Guardian's head of editorial innovation Chris Moran wrote. "But this specific wrinkle — the invention of sources — is particularly troubling for trusted news organizations and journalists whose inclusion adds legitimacy and weight to a persuasively written fantasy."

"And for readers and the wider information ecosystem, it opens up whole new questions about whether citations can be trusted in any way," he added, "and could well feed conspiracy theories about the mysterious removal of articles on sensitive issues that never existed in the first place."

It's not just journalists at The Guardian. Many other writers have found their names attached to sources that ChatGPT had conjured out of thin air.

Kate Crawford, an AI researcher and author of "Atlas of AI," was contacted by an Insider journalist who had been told by ChatGPT that Crawford was one of the top critics of podcaster Lex Fridman. The AI tool offered up a number of links and citations tying Crawford to Fridman, all of which were entirely fabricated, according to Crawford.

It even goes beyond simple fake citations. Last month, journalists at USA Today were shocked to find that ChatGPT had fabricated citations to entire research studies purporting to show that access to guns doesn't raise the risk of child mortality.

While journalists are beginning to ring alarm bells over a surge in made-up reporting, other publications see AI as a big opportunity. Moran said that The Guardian isn't ready to make use of generative AI in the newsroom any time soon, but other outlets, including CNET and BuzzFeed, have already raced ahead and published entire articles generated by AI, many of which were later found to contain factual inaccuracies and plagiarized passages.

In short, with tools like ChatGPT in the hands of practically anybody with an internet connection, we're likely to see a lot more journalists find their names attached to completely fabricated sources, a troubling consequence of tech that has an unnerving tendency to falsify sourcing.

It's a compounding issue: newsrooms across the country are laying off staff even as they invest in AI. And even before the advent of generative AI, journalists were contending with the likes of Elon Musk accusing them of spreading fake news.

Worst of all, there isn't a clear answer as to who's to blame. Is it OpenAI, for allowing its tool to dream up citations unfettered? Or is it the tool's human users, who are wielding that information to make a point?

It's a dire predicament with no easy way out.

More on ChatGPT: Italy’s Deputy Prime Minister Furious His Regulators Banned ChatGPT

