Another election cycle, another social media platform threatening democracy.
According to a new report from Global Witness and NYU's Cybersecurity for Democracy team, TikTok is absolutely terrible at filtering out harmful misinformation regarding elections and politics.
The report tested TikTok, Facebook, and YouTube, all of which performed poorly at detecting and removing misinformation-laden advertising content uploaded by researchers. Of the three, TikTok proved the worst. After submitting droves of ill-informed, potentially dangerous advertisements for approval, the researchers found that 90 percent of those fake ads were ultimately approved by the popular video app.
Although the report is still preliminary, that's an alarming figure — especially considering how quickly TikTok's growth has outpaced that of other platforms in recent years, not to mention how wildly popular it is with younger people.
"This year is going to be much worse as we near the midterms," Olivia Little, a coauthor of the report, told The Guardian. "There has been an exponential increase in users, which only means there will be more misinformation TikTok needs to proactively work to stop or we risk facing another crisis."
Per the report, the fake advertisements varied in severity. Some, for example, contained misleading details like incorrect election dates; some included misleading or false voting requirements; others still used language that outright discouraged citizens from voting in the midterms at all.
And while a failure to filter out that much false — and therefore inherently dangerous — material is a bad look for any platform, it seems especially so for one that's prided itself on its policies regarding election content and political advertisements. TikTok has loudly made clear that its policies don't allow for any paid political ads; verified political accounts are automatically disqualified from using the pay-to-play tools available to influencers, and just this past August, midterms in sight, the platform announced new-and-improved policy changes designed to tackle the misinformation threat.
"At TikTok, we take our responsibility to protect the integrity of our platform — particularly around elections — with the utmost seriousness," Erik Han, the app's head of US safety, wrote in an August blog post. "To bolster our response to emerging threats, TikTok partners with independent intelligence firms and regularly engages with others across the industry, civil society organizations, and other experts."
It's worth noting that this is the second time in recent days that TikTok has very explicitly come under fire for, um, threatening American democracy. On Thursday — the same day that this report was officially released — it was revealed that TikTok's parent company, ByteDance, had been planning to use the app's location data to track the physical locations of specific US citizens. (It's still unclear whether they actually got around to it.)
As the Guardian points out, the app's remarkably tailored algorithm is inextricably linked to its misinformation failures. Like any of the platform's popular dance trends, misinformation can go very viral, very quickly — as documented in a report from the nonprofit Mozilla during Kenya's contentious August elections.
That said, TikTok's virality-happy algorithm is central to the app's success. And if that's how it keeps eyeballs on its pages, it's unlikely the company will make any serious changes that jeopardize that business reality — even as it struggles to keep up with the spread of mis- and disinformation.
"If the TikToks of the world really want to fight fake news, they could do it," Helen Lee Bouygue, who heads a media literacy platform called Reboot Foundation, told the Guardian. "But as long as their financial model is keeping eyes on the page, they have no incentive to do so. That's where policymaking needs to come into play."
READ MORE: 'We risk another crisis': TikTok in danger of being major vector of election misinformation [The Guardian]