Under the leadership of its new CEO Elon Musk, Twitter has settled on keeping its paid "Verified" program via the Twitter Blue subscription, despite heavy scrutiny ever since Musk took over in November.
While the platform now includes different colored checkmarks for businesses and governments, the blue checkmark — which once indicated that a user's identity had been authenticated — remains a free-for-all, and is now being taken advantage of by nefarious parties armed with AI technologies.
As spotted by Twitter user conspirator0, a swath of "verified" Twitter accounts are sporting AI-generated faces as their profile pictures while pretending to be real people.
And many of them, according to conspirator0's findings, "push specific political agendas," both left and right-leaning — though mostly the latter.
One account under the now-suspended handle of cortez_santiage described themselves as a "nationalist," a "paleo-conservative," "anti-liberal," and "anti-cringe." Another found by conspirator0, formerly under the username of Kenoisseur, campaigned to share so-called evidence of "the genocide of whites in America."
Others are more innocuous, like one claiming to be a Harvard-educated epidemiologist.
It's unclear how many of these are straight-up bots or anonymous, perfidious humans trying to maintain a more credible face — but our best guess is that it's a mix of both.
"Allowing accounts with fake faces to be 'verified' without even requiring the operators to disclose that the 'face' is artificially generated is a blatantly pro-deception stance," conspirator0 wrote in a tweet.
Many of the accounts, which date back to November — right after Musk's takeover — were eventually suspended. But conspirator0 has since dug up more verified accounts with AI faces that were neither suspended nor stripped of their "verified" status, using nothing more than a simple Twitter search: the operator "filter:blue_verified" with common English words tacked on.
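The search approach conspirator0 describes can be sketched in a few lines. This is a hypothetical illustration, not their actual tooling: the word list is made up, and the function simply composes query strings you could paste into Twitter's search box.

```python
# Sketch of the search technique: pair Twitter's "filter:blue_verified"
# operator with common English words to surface paying "verified" accounts.
# COMMON_WORDS is an illustrative sample, not a real frequency list.
COMMON_WORDS = ["the", "people", "today", "love"]

def build_queries(words, base="filter:blue_verified"):
    """Return one search query string per word, e.g. 'filter:blue_verified the'."""
    return [f"{base} {word}" for word in words]

for query in build_queries(COMMON_WORDS):
    print(query)
```

Each resulting query restricts results to Blue-subscribed accounts whose tweets contain that word, which is why ordinary vocabulary casts such a wide net.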
Then, to root out the suspect accounts, conspirator0 looked for the telltale signs of faces synthesized using a generative adversarial network (GAN), which they note is used in popular tools like This Person Does Not Exist.
The most prominent and distinguishing feature of unmodified GAN-generated faces is the unmoving placement of the eyes. If you overlay multiple GAN-generated faces, it becomes clear that the portraits weren't naturally taken and cropped. In other words, the eyes almost never deviate.
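That fixed-eye property lends itself to a crude numerical check. The sketch below is a hypothetical illustration under an assumption the article doesn't spell out: that you have already run a face-landmark detector over a batch of profile pictures and extracted each left eye's pixel coordinates (the values here are invented). A near-zero spread in those coordinates across many images is the red flag.

```python
from statistics import pstdev

def eye_position_spread(eye_points):
    """Given (x, y) left-eye coordinates from a batch of images, return the
    population standard deviation of x and of y. Unmodified GAN portraits
    place the eyes at nearly identical pixel positions, so a spread close
    to zero suggests the batch was machine-generated rather than cropped
    from real photos."""
    xs = [p[0] for p in eye_points]
    ys = [p[1] for p in eye_points]
    return pstdev(xs), pstdev(ys)

# Invented example coordinates:
gan_like = [(421, 470), (420, 469), (422, 471)]      # eyes barely move
camera_like = [(300, 410), (512, 388), (450, 502)]   # varied framing

print(eye_position_spread(gan_like))
print(eye_position_spread(camera_like))
```

A real pipeline would need an actual landmark detector and a threshold tuned on known samples; the point here is only that "the eyes almost never deviate" translates directly into a measurable statistic.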
Other indicators include wonky glasses, nonsensical clothing, and distorted secondary faces in the frame. At least one study has identified inconsistent specular highlights in the corneas of the eyes as a particularly reliable giveaway.
But in all likelihood, this is simply the tip of the iceberg. These giveaways only apply to unmodified GAN-generated faces. If someone went to the effort of manually fine-tuning them, even just by a touch, they could be even harder to detect.
It's an especially worrying trend since "Legacy" verified accounts — accounts that were verified under the old program that required users to corroborate their identities — still maintain the same blue checkmark.
That raises the risk that anyone taking only a cursory look at these profiles will mistake them for real people, whether the accounts paid for Twitter Blue or were legacy verified.
Conspirator0 also cites a study published last year in the journal Proceedings of the National Academy of Sciences that found AI face generators "have passed through the uncanny valley and are capable of creating faces that are indistinguishable from — and more trustworthy than — real faces."
Anecdotally, of course, you can find plenty of botched likenesses that suggest the contrary. But the most convincing of them will likely go unnoticed, and the fact that these AI faces look more like professional headshots than bad selfies lends them an undue sense of credibility.
In short, what we're witnessing is a confluence of AI's emerging, widespread popularity, its propensity to be abused to spread misinformation, and Musk's decision to let just about anyone brandish a status-signaling badge.
Yet, admittedly, the badges are arguably the least worrying aspect of this developing trend. Once (or if) widely accessible AIs are competent enough, they won't need trivial, digital badges to feign credibility — or maintain a facade of humanity.
More on AI: Shameless Realtors Are Already Grinding Out Property Listings With ChatGPT