During the 2016 U.S. election, an estimated one in every five tweets about the election was from a bot. Bots were also promoting far-right candidates in France and Germany in 2017, and now they are spreading false information in Ireland ahead of a contentious referendum on abortion. This keeps happening. Why?
Or rather: Why isn't anything being done to prevent bots from interfering with voting?
Governments and media platforms are (ostensibly) trying. Federal and state authorities have begun looking into the proliferation of fake Twitter users, and a Russian bot farm was even indicted by the U.S. special investigation into the election. Twitter has been cracking down on bots by changing its software, and in January removed over 50,000 Russian-linked accounts that had posted automated messages during the election (though we can't help but feel that's too little, too late). Facebook did the same with several hundred fake accounts in September 2017.
In Ireland, officials are trying to introduce legislation that would require social media companies to verify that anyone taking out a political ad is a real person, and to share that information both with regulators and alongside the ad itself. Unfortunately, if it passes, that legislation likely won't come into effect until after the referendum is over. Still, lawmakers believe the law will help prevent foreign influence in future elections.
While government legislation moves frustratingly slowly compared with the speed of news, it's that kind of forward-looking thinking we're going to need if we want to get our bot problem under control. Government action could be the only way to get social media platforms to implement broader countermeasures; as seen in the case of upcoming EU privacy laws, it's often too difficult for platforms to cherry-pick which countries their settings apply in, so legislation in one country can change a platform for everyone.
Facebook already has plans to verify ad buyers in a decidedly old-fashioned way for the upcoming U.S. midterm elections, but those rules appear to apply only in the U.S.
All of which is to say: Platforms are, understandably, reluctant to adopt rules that would make it harder for people to spend money on their sites, or that would limit the growth of new users. Yet those platforms also rely on all of us who trust them to make social media a safe and trustworthy place to spend time. We can't necessarily make online political discourse civil, but with enough pressure from users, we might at least be able to make it human.