Would you trust an article written by a bot?

Trust Issues

With distrust of the news media reaching new heights, some publications have begun experimenting with publishing artificial intelligence-generated content, an experiment that has proven disastrous in many instances.

And as it turns out, readers are growing increasingly wary of the trend, which could erode their trust even further.

According to a new preprint study by researchers from the University of Oxford and the University of Minnesota, readers want news media to disclose when an article is AI-generated. But they also tend to trust news organizations less when they publish AI-generated articles, unless those articles list the source articles the AI drew on.

"As news organizations increasingly look toward adopting AI technologies in their newsrooms," the researchers write, "our results hold implications for how disclosures about these techniques may contribute to or further undermine audience confidence in the institution of journalism at a time in which its standing with the public is especially tenuous."

Full Disclosure

For their study, the researchers surveyed 1,483 English speakers located in the United States, presenting them with a batch of AI-generated political news articles. Some articles were labeled as created by AI and some were not; others were labeled as AI-generated and included a list of the news articles that served as sources.

The researchers then asked participants to rate the trustworthiness of the news organizations behind the articles. They found that readers rated organizations that published articles labeled as AI-generated lower on an 11-point trust scale than organizations whose articles carried no disclosure.

Interestingly, participants didn't deem articles labeled as AI-generated to be "less accurate or more biased," according to the paper. That tracks with the results of an appended survey participants also filled out: more than 80 percent of them want news organizations to label content that was AI-generated.

The researchers also noted some important limitations of their study, including pre-existing partisan divides and the variation in media trust that comes with them. Participants may also have been put off because the mock news organizations named in the study had no real-world recognition.

It's a nuanced topic that highlights the need for further research, as well as more disclosure and thorough vetting of AI-generated content by news organizations.

"I don’t think all audiences will inevitably see all uses of these technologies in newsrooms as a net negative," coauthor and University of Minnesota researcher Benjamin Toff told Nieman Lab, "and I am especially interested in whether there are ways of describing these applications that may actually be greeted positively as a reason to be more trusting rather than less."

More on AI content: Sports Illustrated Union Says It’s "Horrified" by Publication of AI-Generated Writers
