"We really believed in social experiences. We really believed in protecting privacy. But we were way too idealistic. We did not think enough about the abuse cases," Facebook COO Sheryl Sandberg admitted to NPR.

Yes, Facebook has a hate speech problem.

After all, how could it not? For many of Facebook's 2 billion active users, the site is the center of the internet, the place to catch up on news and get updates from friends. That makes it a natural target for those looking to persuade, delude, or abuse others online. Users flood the platform with images and posts that exhibit racism, bigotry, and exploitation.

Facebook has not yet found a good strategy to deal with the deluge of hateful content, as a recent investigation made clear. A reporter from Channel 4 Dispatches in the United Kingdom went undercover with a Facebook contractor in Ireland and discovered a laundry list of failures, most notably that violent content stays on the site even after users flag it, and that "thousands" of posts were left unmoderated well past Facebook's goal of a 24-hour turnaround. TechCrunch took a more charitable view of the investigation's findings, but still found Facebook "hugely unprepared" for the messy business of moderation. In a letter to the investigation's producer, Facebook vowed to quickly address the issues it highlighted.

Tech giants have primarily relied on human moderators to flag problematic posts, but they are increasingly turning to algorithms to do the job. They're convinced it's the only way forward, even as they hire more humans to do the work AI isn't yet ready to do on its own.

The stakes are high; poorly moderating hate speech has tangible effects on the real world. UN investigators found that Facebook had failed to curb the outpouring of hate speech targeting Muslim minorities on its platform during a possible genocide in Myanmar. Meanwhile, countries in the European Union are moving closer to requiring Facebook to curb hate speech, especially hurtful posts targeting asylum seekers; Germany, for instance, has proposed laws that would tighten regulations and could fine the social network if it doesn't follow them.

Facebook has been at the epicenter of the spread of hate speech online, but it is, of course, not the only digital giant to deal with this problem. Google has been working to keep videos promoting terrorism and hate speech off YouTube (but not fast enough, much to the chagrin of big-money advertisers whose ads showed up right before or during these videos). Back in December, Twitter started banning accounts associated with white nationalism as part of a wider crackdown on hate speech and abusive behavior on its platform. Google has also spent a ton of resources amassing an army of human moderators to clean up its platforms, while simultaneously working to train algorithms to help those moderators out.

Keeping platforms free of hate speech is a truly gargantuan task. Every day, some 600,000 hours of new video are added to YouTube, and Facebook users upload around 350 million photos.

Most sites use algorithms in tandem with human moderators. Humans first train the algorithms to flag the content the company deems problematic; human moderators then review what the algorithms flag — it's a reactive approach, not a proactive one. “We’re developing AI tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook,” CEO Mark Zuckerberg told Senator John Thune (R-SD) during his two-day grilling in front of Congress earlier this year, as Slate reported. Zuckerberg admitted that hate speech was too "linguistically nuanced" for AI at this point; he suspected it would get there in about five to ten years.
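To make that division of labor concrete, here is a minimal sketch of a flag-then-review loop in Python. The names, the threshold, and the stand-in scoring function are all hypothetical; no platform publishes its actual pipeline, so treat this as an illustration of the reactive pattern described above, not anyone's real system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    text: str

def hate_speech_score(post: Post) -> float:
    """Stand-in for a trained classifier.

    In a real pipeline this would be a model trained on examples that
    human moderators have already labeled; here it is just a placeholder.
    """
    return 0.0

FLAG_THRESHOLD = 0.8  # hypothetical cutoff set by the platform

@dataclass
class ReviewQueue:
    """Posts the model has flagged; human moderators make the final call."""
    pending: List[Post] = field(default_factory=list)

def triage(post: Post, queue: ReviewQueue) -> None:
    """Reactive moderation: the model only flags, a human decides."""
    if hate_speech_score(post) >= FLAG_THRESHOLD:
        queue.pending.append(post)
```

Everything interesting happens inside the scoring function; the point of the surrounding scaffolding is that nothing comes down automatically, which is the reactive posture the platforms describe.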

But here's the thing: There's no one right way to eradicate hate speech and abusive behavior online. Tech companies, though, clearly want algorithms to do the job, with as little human input as possible. “The problem cannot be solved by humans and it shouldn’t be solved by humans,” Google's chief business officer Philipp Schindler told Bloomberg News.

AI has become much better at picking out hate speech and letting everything else through, but it's far from perfect. Earlier this month, Facebook's hate speech filters decided that large sections of the Declaration of Independence were hate speech, and redacted chunks of the text that a Texas-based newspaper had posted ahead of July 4th. A Facebook moderator restored the full text a day later, with a hasty apology thrown in.

Part of the reason it's so hard to get algorithms to talk like humans, or even debate human opponents effectively, is that algorithms still get caught up on context, nuance, and intent. Was a comment sarcastic, or sincere commentary? The algorithm can't really tell.

A lot of the tools used by Facebook's moderation team shouldn't even be referred to as "AI" in the first place, according to Daniel Faltesek, assistant professor of social media at Oregon State University. "Most systems we call AI are making a guess as to what users mean. A filter that blocks posts that use an offensive term is not particularly intelligent," Faltesek tells Futurism.
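Faltesek's point is easy to illustrate. A term filter of the kind he describes amounts to little more than a word lookup; in the toy sketch below (the blocklist and example posts are invented for illustration), a post that merely quotes an offensive term gets blocked while a lightly misspelled version of the same term sails through.

```python
BLOCKLIST = {"offensiveterm"}  # hypothetical blocked term

def simple_filter(post: str) -> bool:
    """Return True if the post should be blocked.

    This is the kind of system Faltesek is describing: a lookup with no
    sense of context, nuance, or intent.
    """
    words = post.lower().split()
    return any(word.strip(".,!?'\"") in BLOCKLIST for word in words)

# A news report quoting the term is blocked (a false positive)...
print(simple_filter("The senator was criticized for saying 'offensiveterm'."))  # True

# ...while a lightly obfuscated version sails through (a false negative).
print(simple_filter("They are all offens1veterm, every one of them."))  # False
```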

Effective AI would be able to highlight problematic content not just by scanning for a particular combination of letters, but by picking up on shifts in users' sentiment and intent. We don't have that yet. So humans, it seems, will continue to be part of the solution, at least until AI can do the job on its own. Google is planning on hiring more than 10,000 people this year alone, while Facebook wants to ramp up its human moderator army to 20,000 by the end of the year.

In a perfect world, every instance of hate speech would be thoroughly vetted. But Facebook's user base is so enormous that even 20,000 human moderators wouldn't be enough: with 2 billion people on the platform, each moderator would have to look after some 100,000 accounts, and we've learned that the work is simply maddening.

The thing that would work best, according to Faltesek? Pairing these algorithms with human moderators. It's not all that different from what Facebook is working on right now, but it has to keep humans involved. "There is an important role for human staff in reviewing the current function of systems, training new filters, and responding in high-context situations when automated systems fail," says Faltesek. "The best world is one where people are empowered to do their best work with intelligent systems."

There's a trick to doing this well, a way for companies to maintain control of their platforms without scaring away users. "For many large organizations, false negatives are worse than false positives," says Faltesek. "Once the platform becomes unpleasant it is hard to build up a pool of good will again." After all, that's what happened with MySpace, and it's why you're probably not on it anymore.
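In classifier terms, Faltesek's preference translates into where the flagging threshold sits: set it too high and hateful posts slip through (false negatives), set it lower and more innocent posts get flagged for review (false positives). The numbers below are made up purely to show the tradeoff.

```python
# Toy data: (model score, actually hate speech?). Scores are invented.
posts = [(0.95, True), (0.70, True), (0.55, True),
         (0.60, False), (0.30, False), (0.10, False)]

def errors(threshold: float) -> tuple:
    """Count hateful posts missed and benign posts flagged at a given cutoff."""
    false_negatives = sum(1 for score, is_hate in posts if is_hate and score < threshold)
    false_positives = sum(1 for score, is_hate in posts if not is_hate and score >= threshold)
    return false_negatives, false_positives

print(errors(0.9))  # (2, 0): a strict cutoff lets two hateful posts slip through
print(errors(0.5))  # (0, 1): a looser cutoff catches them all but flags one benign post
```

If false negatives are the costlier mistake, the looser cutoff wins, and the extra flagged posts are exactly what the human review layer is there to absorb.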

Hate speech on social media is a real problem with real consequences. Facebook now knows it can't sit idly by and let it take over its platform. But whether human moderators paired with algorithms will be enough to quell the onslaught of hate on the internet is still very uncertain. An army of 20,000 human moderators won't guarantee success on its own, but as of right now, pairing them with algorithms is the best shot the company has. And after what Zuckerberg called a "hard year" for the platform, with plenty of soul searching, now is the best time to get it right.

