We already knew that the tech news site CNET had been publishing AI-generated articles in near secrecy. Things got even more embarrassing for the site when Futurism discovered that the bot's articles were loaded with errors and plagiarism.

Now, according to new reporting from The Verge, the scandal has deepened considerably: leadership at CNET's parent company, Red Ventures, was fully aware that the deeply flawed AI had a habit of fabricating facts and plagiarizing others' work — and deployed it anyway.

"They were well aware of the fact that the AI plagiarized and hallucinated," a source who attended a meeting about the AI's substantial shortcomings at Red Ventures told The Verge.

"One of the things they were focused on when they developed the program was reducing plagiarism," the source added. "I suppose that didn’t work out so well."

That claim adds a dark new layer to the deepening storm cloud over CNET and the rest of Red Ventures' portfolio, which includes the finance sites Bankrate and CreditCards.com, as well as an armada of education and health sites including Healthline.

It'd be bad enough, of course, to roll out a busted AI that churned out SEO-bait financial articles requiring corrections so extensive that more than half of them now carry an editor's note.

But the idea that Red Ventures knew in advance that the AI was broken, discussed the issue in staff meetings, and then chose to deploy it anyway? That's a whole new low, and a cautionary tale about how profit-hungry companies are likely to roll out unfinished AI tech in the media and far beyond.

CNET didn't respond to a request for comment about the new allegations.

Are you a current or former Red Ventures employee with information about the company's use of AI? Drop us a line at tips@futurism.com. We can keep you anonymous.

The revelation also sheds new light on CNET editor-in-chief Connie Guglielmo's strident defense of the bot after its errors and plagiarism were exposed.

"Expect CNET to continue exploring and testing how AI can be used to help our teams as they go about their work testing, researching and crafting the unbiased advice and fact-based reporting we’re known for," Guglielmo wrote.

Guglielmo has not responded to the new allegations.

New allegations surfaced by The Verge suggest that CNET's fall from integrity has been more profound than previously known, even beyond its use of AI.

Multiple former employees told the outlet that they were repeatedly pressured to edit stories and reviews so that they would be more favorable to the company's advertisers.

"I understood a supervisor to imply in conversation that how I proceeded with my review could impact my chances of promotion in the future," one source told The Verge.

And anyone at the company who opposes CNET's pivot to wretchedly low-quality AI and undisclosed advertiser influence over coverage? They're getting shown the door.

"It’s a culture that if you disagree with them, they’re going to get rid of you and replace you with a zealot," another former employee told The Verge. "Somebody that’s absolutely a true believer, [that] drinks the Kool-Aid."

More on CNET: Leaked Messages Show How CNET's Parent Company Really Sees AI-Generated Content
