The odds that the New York Times and other major news outlets have published AI-generated articles — whether knowing it or not — seem very high indeed.
Speculation about that possibility swirled earlier this week, centering on a “Modern Love” column the NYT published last November. It was sparked when Becky Tuch of Lit Mag News posted an excerpt of the piece on X alongside her controversial take: “this reads EXACTLY like AI slop,” she wrote.
Turns out there’s evidence that Tuch was onto something, a new piece in The Atlantic reveals.
The writer of the column, Kate Gilgan, told the magazine that she hadn’t copied and pasted language from an AI model, but “did utilize AI as a tool,” turning to chatbots like ChatGPT, Claude, and Gemini for “inspiration and guidance and correction.”
“I used AI as a collaborative editor and not as a content generator,” Gilgan insisted.
At this point in the AI boom, when we know that AI’s effects on its users often reach further than they realize, this feels like a thin distinction to make. If you’re constantly consulting a chatbot, it seems inevitable that its style and form will rub off on you.
And the scale may be substantial. Controversies like the one surrounding Gilgan’s column inspired several AI researchers to go back and measure how much AI material has crept into American newspapers.
Using an AI-detection tool from the startup Pangram Labs, the researchers analyzed newly published articles; their findings, released as a preprint study in October, should raise alarm. They suggest that nine percent of newly published articles are either partially or fully AI-generated, mostly in smaller, local outlets.

But when the researchers focused on opinion pieces in newspapers of record, including the New York Times, the Wall Street Journal, and the Washington Post, they found that those pieces were over six times more likely to contain AI-generated content than articles produced by the papers’ own newsrooms.
Now, a disclaimer: many AI detectors, especially free ones, are notoriously unreliable. (A screenshot of an AI detector flagging a passage from Mary Shelley’s “Frankenstein” as “100 percent AI generated” recently went viral, drawing heaps of mockery.) False accusations happen all the time. But, for what it’s worth, Pangram tends to be held up as among the most reliable detectors out there, a reputation borne out in head-to-head tests.
Moreover, it’s noteworthy that the detector singled out opinion pieces rather than news articles. Opinion pieces are often penned by writers who aren’t professional journalists and don’t work within the organization, meaning there’s less oversight over how they’re written. In other words, it makes sense that AI content would turn up in opinion sections, which are often used to platform all kinds of claims in need of a heavy reality check, and that plausibility lends the detector’s results some credibility. No one denies that loads of AI-generated dreck is polluting scientific journals, so why should news outlets be spared?
This comes as many news organizations become uncomfortably tangled with AI companies. The Washington Post launched an AI-generated podcast feature that creates summaries of the paper’s latest stories, along with a chatbot that fields reader questions. The New York Times uses AI to generate headlines. Bloomberg provides AI-generated summaries of its articles. A senior manager at the Associated Press recently told staffers that “resistance” to AI was “futile.”
Letting these tools anywhere near newsrooms, though, could be a slippery slope. Last month, a senior Ars Technica reporter was caught accidentally using AI-fabricated quotes in an article, forcing the publication to issue a retraction. The reporter claimed he didn’t use AI to write the article itself, but he did use a chatbot to summarize his notes, and a quote the AI hallucinated slipped into the finished piece. He was terminated after an investigation.
More on AI: Novel Pulled From Shelves After Author Is Accused of Using AI