The Los Angeles Times is now shoving artificial intelligence into its opinion articles — and it already seems to be backfiring.
Earlier this week, the newspaper's billionaire owner Patrick Soon-Shiong announced that the LA Times would be "releasing new features to enhance and improve our digital product," including "insights" on opinion pieces that offer a "wide range of different AI-enabled perspectives."
In other words, AI will generate counter-arguments to opinion pieces penned by the newspaper's human experts, with no input from the paper's journalists.
Shortly thereafter, reporters noticed that an excellent article about Anaheim residents kicking the Ku Klux Klan out of town a century ago had been amended with some dodgy AI-generated addendums.
Generated with the help of Perplexity, an AI startup that announced its partnership with the LA Times in December and that has been accused of copyright infringement by the New York Times and sued by other publishers, the since-deleted "insights" seemed to downplay the seriousness of the loathsome KKK.
In one of the bullet points, the AI noted that some people in the region think there's no "formal proof" that the KKK was all that bad. In another, it pointed out that the Klan was considered part of "white Protestant culture" in 1920s Orange County.
Are those claims exactly wrong? Not really, the author of the original op-ed conceded, but they're giving oxygen — with no rebuttal — to talking points that soften up the image of a notorious racist terrorist group.
"The AI 'well, actually'-ed the KKK," NYT tech reporter Ryan Mac wrote in a Bluesky post that included screenshots of the contextual addendums, which appear to have been removed after the backlash.
The newspaper's communications team has not responded to Futurism's email asking why such content was added to the article or why it was deleted.
In response to news that the paper was introducing these generative rebuttals, LA Times Guild vice-chair Matt Hamilton said that the paper's editors don't review the AI-generated additions to opinion pieces before they go live.
"We don’t think this approach — AI-generated analysis unvetted by editorial staff — will do much to enhance trust in the media," the union leader said in a statement provided to The Hollywood Reporter and other outlets. "Quite the contrary, this tool risks further eroding confidence in the news."
More on AI and journalism: ChatGPT and Other Chatbots Are Hurting Publishers Even Worse Than We Thought