As generative AI has exploded into the mainstream, both excitement and concern have quickly followed suit. And unfortunately, according to a collaborative new study from scientists at Stanford, Georgetown, and OpenAI, one of those concerns — that language-generating AI tools like ChatGPT could turn into chaos engines of mass misinformation — isn't just possible, but imminent.

"These language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor," write the researchers.  "For society, these developments bring a new set of concerns: the prospect of highly scalable — and perhaps even highly persuasive — campaigns by those seeking to covertly influence public opinion."

"We analyzed the potential impact of generative language models on three well-known dimensions of influence operations — the actors waging the campaigns, the deceptive behaviors leveraged as tactics, and the content itself," they added, "and conclude that language models could significantly affect how influence operations are waged in the future."

In other words, the experts found that language-modeling AIs will undoubtedly make it easier and more efficient than ever to generate massive amounts of misinformation, effectively transforming the internet into a post-truth hellscape. And users, companies, and governments alike should brace for the impact.

Of course, this wouldn't be the first time that a new and widely adopted technology has thrown a chaotic, misinformation-laden wrench into world politics. The 2016 election cycle was one such reckoning, as Russian bots made a concerted effort to disseminate divisive, often false or misleading content in a bid to disrupt the American presidential election.

But while the actual efficacy of those bot campaigns has been debated in the years since, that technology is archaic compared to the likes of ChatGPT. While still imperfect — the writing tends to be good but not great, and the information it provides is often wildly wrong — ChatGPT is remarkably good at generating convincing-enough, confident-sounding content. And it can produce that content at astonishing scale, eliminating almost all of the need for time-consuming, costly human effort.

Thus, with language models in the mix, misinformation becomes cheap to churn out at a constant clip, making it likely to do a lot more harm, a whole lot faster, and more reliably to boot.

"The potential of language models to rival human-written content at low cost suggests that these models — like any powerful technology — may provide distinct advantages to propagandists who choose to use them," reads the study. "These advantages could expand access to a greater number of actors, enable new tactics of influence, and make a campaign's messaging far more tailored and potentially effective."

The researchers do note that because AI and misinformation are both changing so quickly, their research is "inherently speculative." Still, it's a grim picture of the internet's next chapter.

That said, the report isn't all doom and gloom (though there's certainly a lot of both involved). The experts also outline a few ways of hopefully countering the dawning era of AI-driven misinformation. And while these, too, are imperfect, and in some cases perhaps not even possible, they're still a start.

AI companies, for example, could adopt tighter development policies, ideally holding their products back from market until proven guardrails, like watermarks, are built into the tech; meanwhile, educators might work to promote media literacy in the classroom, with a curriculum that hopefully grows to include the subtle cues that can give something away as AI-made.

Distribution platforms, for their part, might work to develop a "proof of personhood" feature that goes a bit more in-depth than a "check this box if there's a donkey eating ice cream in it" CAPTCHA. Those same platforms could also build out teams that specialize in identifying and removing AI-wielding bad actors from their sites. And in a slightly Wild West turn, the researchers even suggest employing "radioactive data," a complicated measure that would involve training machines on traceable data sets so that AI-generated content could later be detected. (As it probably goes without saying, this "nuke-the-web plan," as Casey Newton of Platformer put it, is extremely risky.)

There would be learning curves and risks to each of these proposed solutions, and none can fully combat AI misuse on its own. But we have to start somewhere, especially considering that AI programs seem to have a pretty serious head start.

READ MORE: How 'radioactive data' could help reveal malicious AIs [Platformer]

