Last week, we reported that the prominent technology news site CNET had been quietly publishing articles generated by an unspecified "AI engine."
The news sparked outrage. Critics pointed out that the experiment felt like an attempt to eliminate work for entry-level writers, and that the accuracy of current-generation AI text generators is notoriously poor. The fact that CNET never publicly announced the program, and that the disclosure that the posts were bot-written was hidden away behind a human-sounding byline — "CNET Money Staff" — made it feel as though the outlet was trying to camouflage the provocative initiative from scrutiny.
After the outcry, CNET editor-in-chief Connie Guglielmo acknowledged the AI-written articles in a post that celebrated CNET's reputation for "being transparent."
Without acknowledging the criticism, Guglielmo wrote that the publication was changing the byline on its AI-generated articles from "CNET Money Staff" to simply "CNET Money," as well as making the disclosure more prominent.
Furthermore, she promised, every story published under the program had been "reviewed, fact-checked and edited by an editor with topical expertise before we hit publish."
That may well be the case. But we couldn't help but notice that one of the very same AI-generated articles that Guglielmo highlighted in her post makes a series of boneheaded errors that drag the concept of replacing human writers with AI down to earth.
Take this section in the article, which is a basic explainer about compound interest (emphasis ours):
"To calculate compound interest, use the following formula:
Initial balance (1+ interest rate / number of compounding periods) ^ number of compoundings per period x number of periods
For example, if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you'll earn $10,300 at the end of the first year."
It sounds authoritative, but it's wrong. In reality, of course, the person the AI is describing would earn only $300 over the first year. It's true that the total value of their principal plus their interest would total $10,300, but that's very different from earnings — the principal is money that the investor had already accumulated prior to putting it in an interest-bearing account.
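The distinction is easy to verify with the standard compound-interest formula, balance = P(1 + r/n)^(nt). Below is a minimal sketch in Python using the scenario from the article ($10,000 at 3%, compounded annually, for one year); the variable names are ours, not CNET's:

```python
# Worked check of the compound-interest scenario quoted above.
# Assumed inputs: $10,000 principal, 3% annual rate, compounded annually, 1 year.
principal = 10_000.00
rate = 0.03   # 3% annual interest rate
n = 1         # compounding periods per year (annual compounding)
years = 1

# Standard compound-interest formula: balance = P * (1 + r/n) ** (n * t)
balance = principal * (1 + rate / n) ** (n * years)
interest_earned = balance - principal

print(f"Ending balance:  ${balance:,.2f}")        # $10,300.00
print(f"Interest earned: ${interest_earned:,.2f}")  # $300.00
```

The ending balance is indeed $10,300, but the amount *earned* is only the $300 in interest, exactly as Dowling notes below.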
"It is simply not correct, or common practice, to say that you have 'earned' both the principal sum and the interest," Michael Dowling, an associate dean and professor of finance at Dublin City University Business School, told us of the AI-generated article.
It's a dumb error, and one that many financially literate people would have the common sense not to take at face value. But the article is written at a level so basic that it would only really interest readers with very little knowledge of personal finance in the first place. In other words, it risks setting wildly unrealistic expectations (that you could earn $10,300 in a year on a $10,000 deposit) for the exact readers who don't know enough to be skeptical.
Another error in the article involves the AI's description of how loans work. Here's what it wrote (again, emphasis ours):
"With [mortgages] and [auto loans], interest is usually calculated in simple terms.
For example, if you take out a car loan for $25,000, and your interest rate is 4%, you'll pay a flat $1,000 in interest per year."
Again, the AI is writing with the panache of a knowledgeable financial advisor. But as a human expert would know, it's making another ignorant mistake.
What it's bungling this time is that the way mortgages and auto loans are typically structured, the borrower doesn't pay a flat amount of interest per year, or even per monthly payment. Instead, on each successive payment they owe interest only on the remaining balance. That means that toward the beginning of the loan, the borrower pays more interest and less principal, which gradually reverses as the payments continue.
It's easy to illustrate the error by entering the details from the CNET AI's hypothetical scenario — a $25,000 loan with an interest rate of 4 percent — into an auto loan amortization calculator. The result? Contrary to what the AI claimed, there's never a year when the borrower will pay a full $1,000 in interest, since they start chipping away at the balance on their first payment.
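The same check can be sketched in a few lines of Python. The article's scenario doesn't specify a loan term, so a 5-year (60-month) term is assumed here purely for illustration; the point holds for any standard amortizing loan:

```python
# Amortization sketch for the article's scenario: a $25,000 loan at 4% annual
# interest. The term isn't given in the article, so 60 months is assumed here.
principal = 25_000.00
annual_rate = 0.04
months = 60
r = annual_rate / 12  # monthly interest rate

# Fixed monthly payment for a fully amortizing loan
payment = principal * r / (1 - (1 + r) ** -months)

balance = principal
first_year_interest = 0.0
for month in range(1, months + 1):
    interest = balance * r           # interest accrues only on the remaining balance
    balance -= payment - interest    # the rest of each payment reduces the principal
    if month <= 12:
        first_year_interest += interest

print(f"Monthly payment: ${payment:.2f}")
print(f"Interest paid in year one: ${first_year_interest:.2f}")
```

Because the balance shrinks with every payment, even the first year's interest comes in under the flat $1,000 the AI claimed, and each subsequent year's total is lower still.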
CNET's AI is "absolutely" wrong in how it described loan payments, Dowling said.
"That's just simply not the case that it would be $1,000 per year in interest," he said, "as the loan balance is being reduced every year and you only pay interest on the outstanding balance."
The problem with this description isn't just that it's wrong. It's that the AI is eliding an important reality about many loans: that if you pay them down faster, you end up paying less interest in the future. In other words, it's feeding terrible financial advice directly to the people trying to improve their grasp of personal finance.
The AI made yet another gaffe when it attempted to describe certificates of deposit, better known as CDs, which are financial products that offer interest, but typically discourage withdrawing the funds before a set period has elapsed (once more, emphasis ours):
"Note that a [savings account] or [CD] may offer interest that compounds daily, weekly or monthly. But a [CD] only compounds once, after the initial deposit reaches maturity."
All three screwups, each of which the AI presented with the easy authority of an actual subject matter expert, highlight a core issue with current-generation AI text generators: while they're legitimately impressive at spitting out glib, true-sounding prose, they have a notoriously difficult time distinguishing fact from fiction.
For an editor, that's bound to pose an issue. It's one thing to work with a writer who does their best to produce accurate work, but another entirely if they pepper their drafts with casual mistakes and embellishments. BuzzFeed News perfectly illustrated that risk this week, when a reporter there used ChatGPT to generate a story about CNET's secretive use of AI — only to find that she "had to rewrite the prompt a few times to get it to stop inserting factual errors."
Another issue that may be at play here is well known in the separate AI-inflected field of self-driving cars. Researchers have found that human safety drivers, tasked with sitting behind the wheel of an autonomous vehicle to take over if it malfunctions, tend to quickly lose focus when they don't have to actively work the controls. The same dynamic may be at play when an editor is put in charge of approving a deluge of AI-generated explainers: in the face of endless synthetic writing, maybe it makes sense that human editors start to go on autopilot themselves.
Everyone makes mistakes, so we're certainly sympathetic. But in these early days of CNET's AI experiment — never mind in a piece published the same day that the site's editor went public in response to a storm of criticism — you'd expect the editors tasked with monitoring the AI to be on their highest alert.
If these are the sorts of blunders that slip through during that period of peak scrutiny, what should we expect when there aren't so many eyes on the AI's work? And what about when copycats see that CNET is getting away with the practice and start filling the web with their own AI-generated content, with even fewer scruples?
It's also worth asking what readers actually want: financial advice from a real human with real financial concerns, or logorrhea from a bot that's been trained to rehash existing financial writing with no financial stake of its own?
Dowling said that while he's optimistic about the potential of AI in general, he suspects that the lack of personal perspective or "insights that go beyond mere summary" will keep an algorithm like CNET's from producing genuinely interesting work.
"People already approach finance reading with an advance sense of boredom and reluctance — will ChatGPT just embed those negative features even deeper in finance writing?" he asked.
After Futurism reached out to CNET about the errors, staff there issued a lengthy correction to the article and edited the text to address all three mistakes.
Staff at CNET also seemingly identified a fourth error by the AI, which they also described in the correction, regarding the distinction between Annual Percentage Rate (APR) and Annual Percentage Yield (APY).
A CNET spokesperson provided Futurism with a brief statement about the corrections.
"We are actively reviewing all our AI-assisted pieces to make sure no further inaccuracies made it through the editing process, as humans make mistakes, too," they said. "We will continue to issue any necessary corrections according to CNET's correction policy."
Are you a current or former CNET employee who wants to discuss the company's foray into AI-generated articles? Email email@example.com to share your perspective. It's okay if you don't want to be identified by name.
The spokesperson didn't respond to a question about CNET's confidence in the other articles the AI has published to its site. After we reached out, however, a new message appeared at the top of almost every piece the AI has published, dating back to November.
"Editors' note: We are currently reviewing this story for accuracy," reads the message. "If we find errors, we will update and issue corrections."
It's worth pointing out, as Platformer's Casey Newton did this week, that CNET's AI-generated finance articles arguably only exist in the first place because they're trying to manipulate Google's algorithm for profit. Countless better explanations of compound interest already exist; CNET's strategy is simply to publish large volumes of cheaply produced text, carefully optimized to float to the top of search results, in a bid to capture the monetizable eyeballs of the financially curious.
"Over time, we should expect more consumer websites to feature this kind of 'gray' material: good-enough AI writing, lightly reviewed (but not always) by human editors, will take over as much of digital publishing as readers will tolerate," Newton wrote. "The quiet spread of AI kudzu vines across CNET is a grim development for journalism, as more of the work once reserved for entry-level writers building their resumes is swiftly automated away."
In other words, it's not just AI that's the issue here. It's that AI is maturing at a moment when the journalism industry has already been hollowed out by a decades-long race to the bottom — a perfect storm for media bosses eager to cut funding for human writers.
Frank Landymore contributed reporting to this story.