When prominent tech news site CNET was caught last month quietly publishing dozens of AI-generated articles, the revelation produced widespread alarm. News readers learned in real time that the explosive new capabilities of software like OpenAI's GPT-3 meant they could no longer trust that CNET's journalism was produced by a human. It didn't help when we discovered that the AI-generated articles were riddled with errors and substantially plagiarized, with CNET eventually issuing corrections on more than half of the bot's published pieces.
Now, leaked internal messages reveal that the negative headlines also kicked off deep concern inside CNET's parent company, Red Ventures.
But the consternation wasn’t about the ethics of providing readers with shoddy AI-generated misinformation. Instead, directors at the company expressed a profoundly cynical anxiety: that Google would notice the dismal quality of the AI’s work — and cut off the precious supply of search results that Red Ventures depends on for revenue.
The company's director of search engine optimization, Jake Gronsky, became particularly unnerved when Google acknowledged the story and issued an official response.
"CNET and Bankrate gained a ton of unnecessary attention for their AI Content disclosure to the point that Google issued a statement," he fretted in Slack messages reviewed by Futurism, adding ominously that Google "never does that."
"Residual impact could be on more harsh Google Systems updates," he wrote. "I wouldn't be surprised if G [Google] comes down hard on any sense of irregular content."
He was right to be concerned. As The Verge reported in the wake of the AI revelations, Red Ventures has transformed the once-venerable CNET into an "AI-powered SEO money machine" that depends largely on a constant supply of clicks, supplied by Google searches.
Basically, the company identifies personal finance questions that people with low financial literacy are likely to search the web for — one recent headline by the CNET AI: "What Is a Credit Card?" — and then churns out articles on those topics designed to capture "high intent" potential customers, whom it pushes toward affiliate links for new credit cards and loans. If a reader signs up for one of those cards or loans, Red Ventures gets a sizable kickback, often pocketing hundreds of dollars for a single customer referral.
"From what I can see, they're not particularly concerned about quality, as you can see from CNET," a former Red Ventures employee told Futurism of the AI. "That's not something the old CNET would have done. This is a post-acquisition situation, with fewer employees expected to produce more content and make the company more money."
The scheme is lucrative at scale; the company's CEO, Ric Elias, was already a billionaire by 2021. But the operation has a major weak point: Google could knock down the entire house of cards at any moment by deciding to drive people searching those queries toward better-produced, more helpful content.
Looking at the outraged public reaction to the news about CNET's AI articles, Gronsky realized that the company might have gone too far. In fact, he explained, he saw the whole scandal as a cautionary tale illustrating that Red Ventures shouldn't disclose its AI-generated content to readers at all.
"Disclosing AI content is like telling the IRS you have a cash-only business," he warned.
Gronsky wasn't just concerned about CNET and Bankrate, though. He was also worried that a Google crackdown could impact Red Ventures' formidable portfolio of sites that target prospective college students, known internally as Red Ventures EDU.
The EDU division includes BestColleges.com, OnlineMBA.com, and Accounting.com, as well as numerous sites with domain names that imply they're nonprofits, including TheBestSchools.org, the nursing school-focused NurseJournal.org, the cybersecurity-focused CyberDegrees.org, the coding-focused ComputerScience.org, and the psych-focused Psychology.org. They all employ a similar model to CNET and Bankrate — except instead of funneling people with low financial literacy toward credit cards, they push would-be students toward costly college enrollments.
That business model can lead to some dark places. One client the EDU sites plow prospective students toward is Liberty University, a right-wing Christian institution that teaches creationism in science class, has been credibly accused of systematically burying sexual assault reports, and where a diversity supervisor allegedly referred to queer people as “abominations.” Another is the University of Phoenix, a for-profit online school that reached a $191 million settlement with the FTC over allegations that it had enticed students to take on staggering amounts of debt using fraudulent claims about employment opportunities for graduates.
According to the same former employee, the company's EDU portfolio now appears to be publishing immense volumes of AI-generated content — except, unlike at CNET or Bankrate, there's no disclosure at all.
Much of the AI's work, the source suspects, is being used to grind out the sites' avalanche of list-based articles, which carry titles engineered to scoop up Google traffic, like "Best HBCUs for LGBTQ+ Students" and "Best Online Master's in Elementary Education Programs."
But the sites' use of AI may run far deeper, the former employee added.
"I would not put it past them to [use AI] for long-form articles," they said. "You're talking about a company who partners with Liberty University while also producing long DEI [diversity, equity, and inclusion] statements and stuff like that. There's not a lot of ethics there."
Did Red Ventures know about problems with its AI before deploying it? Drop a line to tips@futurism.com. We can keep you anonymous.
Kevin Hughes, the head of AI content for Red Ventures' EDU division, is a true believer in the company's vision of sites populated largely by AI-generated content. On LinkedIn, for instance, he recently boasted that the company's "dirt cheap" AI can crank out articles so efficiently that it costs "less than a penny for 750 words" — a practice that he wrote was "generating millions of dollars in revenue."
In a prior career, according to LinkedIn, Hughes was an adjunct philosophy professor at several schools including Alvernia University and Albright College.
"I feel incredibly lucky to be able to take an apparently unrelated skill set to the corporate environment and find that it fits so neatly with company needs," he wrote of the transition. "It helps, too, that I've read my existentialists, and can identify and speak somewhat cogently about the human impact these systems are likely to have on all of us. It's both terrific and terrifying.”
Those impacts will be profound, according to Hughes.
"AI generative tools are gonna change the job market," he wrote. "I don't often play prophet, but in this case, I think I'm (kinda) qualified."
Some of what Hughes anticipates for AI might sound grim for both readers and writers, at least to a person outside Red Ventures' AI-powered empire. In fact, he can start to sound a lot like the spammers who were giddy with glee after CNET was caught publishing the AI-generated articles, reasoning that if such a prominent publisher could get away with the practice without sanction from Google, they could too.
"Copy writers [sic] building mostly rote, templated content in any market will need to reskill as copy editors," Hughes prognosticated. "The volume of copy will increase, but so with it a decline in quality."
In Hughes' vision, those remaining human employees will act as high-tech pit bosses of sorts, crafting prompts for AIs that spew out vast amounts of written material. They'll "know good copy when they see it," he wrote, and make sure the resulting text is imbued with "editorial authority." They'll also have to develop new skills, like how to fix a "citation error" in "500 pages" at once.
"There's a lot I can say about those posts from Kevin," said the former Red Ventures employee. "Obviously, this idea of a bot utopia is not a realistic prediction. More likely you end up with an internet flooded with shit-tier SEO articles that are then using each other as datasets, so things just get worse and worse."
The performance of Red Ventures' AI so far would appear to support that interpretation.
One skill that Hughes didn't mention for his employees of the future, for instance, was making sure the AI's work was factually accurate. But the CNET AI's work soon turned out to be dismally flawed, littered with basic errors about personal finance. After we reached out to CNET with questions about the bot's work, the site conducted an internal audit and eventually issued corrections on more than half of the 70-odd articles it had published using the AI.
Another characteristic of the CNET AI's work that Hughes didn't address on LinkedIn was originality. Scrutiny soon showed that to be another urgent issue at CNET, with the AI habitually cribbing sentences from human writers — often Red Ventures competitors, but sometimes sites it owns as well — near-verbatim, while shaking up the syntax a bit to throw plagiarism detectors off the scent. After we asked CNET about the issue, it edited numerous articles to replace what it called "phrases that were not entirely original."
But it's impossible to probe the quality of any AI-generated content on the EDU college recommendation sites that Hughes is responsible for, because nothing on those sites is currently labeled as AI-generated.
"The audience thinks that human beings are writing this stuff," the former Red Ventures employee said, "and giving them advice from experts, rather than something generated by AI."
In messages championing the AI-written articles, Hughes seems to support the idea that they make up an appreciable portion of the education sites' overall content.
"So, just for funzies, I want y'all to see how quickly AI-generated content happens. I'm hoping it'll incentivize some of you to send some ideas our way," he wrote in a Slack message last year, before the controversy. He added that he and another employee had "created 90 pages of blurb content in about a day. Each of these 250 blurbs is unique."
Later, when the CNET AI scandal started to spark critical headlines, Hughes was perturbed.
"I choose violence," he thundered on Slack, proffering the argument that if "Google is building generative AI models" then "they can't vary [sic] well penalize us for using the same thing."
Still, like Gronsky, he seemed troubled about that exact possibility.
One potential pitfall, he vented, was that OpenAI could start "watermarking" its content, or peppering it with a secret signal that could be detected by the likes of Google. If so, Hughes complained, "any scrambling we do to the variations in programmatic content is still detectable as AI content."
And even if OpenAI doesn't take that route, he wrote, Red Ventures will still have to contend with third-party AI-detection tools, not to mention ones developed by Google itself.
"Even if they're not watermarking it, and don't ever plan to, plenty of systems already exist that can detect AI-written content with high accuracy," he wrote.
He also talked about ways Red Ventures could fool Google by mangling the AI-generated content further, seemingly referencing efforts that were already underway at the company.
"Scrambling the order of the sentences doesn't matter," he wrote. "The only way we might sneak by is by infusing data in the prompts (as we are planning), such that the data creates content that an AI detector might get confused by."
"But the reality," he lamented, "is that if there's 3-4 sentences in a row without data (and there will be), Google, and others, will be able to know we're using AIs."
Google's current position is that it doesn't limit the visibility of online material in search results simply because it was generated using AI; instead, it aims to promote content based on whether it's helpful to users.
"Our ranking team focuses on the usefulness of content, rather than how the content is produced," the company's public search liaison Danny Sullivan said after CNET's use of AI first came to light. "This allows us to create solutions that aim to reduce all types of unhelpful content in Search, whether it’s produced by humans or through automated processes."
He later clarified, though, that content "created with the aim of gaming search ranking — regardless of if it’s produced by humans or AI — is still spam and will be treated as such."
Those remarks reflect Google's push toward elevating "helpful content" in its search results. The search giant's documentation of what constitutes helpful content warns content creators against using "extensive automation to produce content on many topics," as well as against publishing material designed "primarily to attract people from search engines" or "producing lots of content on different topics in hopes that some of it might perform well in search results."
It’s difficult to imagine a reasonable person who could look at Red Ventures' secretive, factually bereft, plagiaristic AI content — which employees of the company are actively scheming to hide from both readers and Google — and conclude that it's helpful to users seeking accurate information. New York Times tech columnist Kevin Roose, for instance, quipped that CNET’s AI articles were "pink slime journalism," referring to the meat byproduct made by grinding up waste from animal carcasses.
It’s an apt analogy, and one that points toward a significant question now facing Google as it reckons with the rise of AI-generated content in its search results: is there a line that a company like Red Ventures could cross — or has already crossed — at which Google’s search team will signal that it's willing to defend the quality of its search results by enacting meaningful consequences?
The answer is unclear. Google waffled a bit, but ultimately didn’t comment for this story. The company is clearly concerned, though, that AI-generated content poses a threat to the reliability of its search results. The release of ChatGPT, for instance, reportedly spurred it to declare a "code red" in December of 2022.
In other words, there’s no question that people on Google’s search team are thinking hard about what it means that a company like Red Ventures can now pump out essentially unlimited amounts of barely-vetted AI content. As such, Red Ventures’ choice to push so far ahead of any other media company with AI can start to look seriously reckless — the sort of thing that could one day become a business school case study about a spectacular corporate implosion, triggered by adopting new technology too greedily, too lazily, and with too flagrant a disregard for its own customers.
But the former Red Ventures employee said the rush to AI feels characteristic of Red Ventures, which they said tends to "choke the golden goose."
"What it comes down to is the need to make as much SEO-focused content, as quickly as possible, for that very quick cash grab," they said. "Red Ventures is all about that. They want that quick cash grab as quickly as possible."
Lately, the company’s leadership seems to be moving toward a new strategy: abandoning the term "AI" to describe what it's doing with software made by a company called OpenAI.
Tim Merithew, who lists his job title as "senior director of product management, AI/ML," wrote in a recent message that "coming out of some convos with senior leadership, there is going to be an intentional effort to minimize the external and internal usage of the wording 'AI.'"
Going forward, Merithew explained — without addressing, remarkably, the fact that the term "AI" appears in his own job title — the company would instead be using the word "tooling."
"Generated by AI," he wrote, sounds different from "writer/editor had tooling to help them accomplish their role, some of that being AI powered."
The framing he prefers, he wrote, is "we give our talent tooling and some of those tools have ML/AI capabilities."
Merithew also alluded to the company's efforts to generate video using AI (or sorry, "tooling"), writing that Red Ventures has "saved [creatives] their time by using technology to help them deliver in their voice and likeness."
Whether that shift in vocabulary will be enough to stave off action from Google is tough to say. That ambiguity illustrates the broader peril looming over the nascent AI industry, into which power players like Microsoft are currently pouring untold billions. The tech is undeniably impressive, but huge unanswered questions about its little-tested legality, its undeveloped norms, and its checkered ethics hang over it like swords of Damocles.
In a cynical way, Red Ventures is probably correct to understand that the tech is now mature enough to pump the internet full of profitable Potemkin content. In that sense, the company is visionary.
But at the same time, the practice is clearly unhelpful for readers, and actually seems likely to actively misinform them at the exact moment they’re looking for life-altering advice about money and higher education. What Red Ventures probably cares about more than that, though, is the classic weakness of parasites: at the end of the day, it’s dependent on its host for survival.
"Google could roll out an algorithm change that detects those AI posts, and then you get blacklisted," the former Red Ventures employee said. "And then what happens to your company?"
More on AI: Experts Warn of Nightmare Internet Filling With Infinite AI-Generated Propaganda