It's an ugly reality we see in every corner of the web: racism, bigotry, misogyny, political extremism. Hate speech seems to thrive on the internet like a cancer.

It persists and flourishes on social media platforms like Facebook, Twitter, and Reddit — they certainly don't claim to welcome it, but they're having a hell of a time keeping it in check. No AI is yet sophisticated enough to flag all hate speech perfectly, so human moderators have to join the robots in the trenches. It's an imperfect, time-consuming process.

As social media sites come under increasing scrutiny to root out their hate speech problem, they also come up against limits on how much they can (or will) do. So whose responsibility is it, anyway, to moderate hate speech? Is it up to online platforms themselves, or should the government intervene?

The British government seems to think the answer is both. The Home Office and the Department for Digital, Culture, Media and Sport (DCMS) — the department responsible for regulating broadcasting and the internet — are drafting plans for regulation that would make platforms like Facebook and Twitter legally responsible for all the content they host, according to BuzzFeed News.

In a statement to Futurism, the DCMS says that it has "primarily encouraged internet companies to take action on a voluntary basis." But progress has been too slow, which is why it now plans "statutory intervention."

But is this kind of government intervention really the right way forward when it comes to hate speech online? Experts aren't convinced it is. In fact, some think it may even do more harm than good.

Details about the DCMS' plan are scant — it's still early in development. What we do know so far, BuzzFeed reports, is that the legislation would have two parts. One: it would introduce "take down times" — timeframes within which online platforms have to take down hate speech or face fines. Two: it would standardize age verification for Facebook, Twitter, and Instagram users. A white paper detailing these plans will reportedly be published later this year.

Why should the government intervene at all? Internet platforms are already trying to limit hate speech on their own. Facebook removed more than 2.5 million pieces of hate speech and "violent content" in the first quarter of 2018 alone, according to a Facebook blog post published back in May.

Indeed, these platforms have been dealing with hate speech for as long as they've existed. "There's nothing new about hate speech on online platforms," says Brett Frischmann, a professor in Law, Business and Economics at Villanova University. The British government may be rushing to legislate against hate speech too quickly to come up with anything that works the way it's supposed to.

Unfortunately, moderating hate speech is a game of whack-a-mole that moves far faster than platforms can keep up with. As a result, a lot of it goes unmoderated. For instance, hate speech from far-right extremist groups in the U.K. often still falls through the cracks, fueling xenophobic beliefs. In extreme cases, that kind of hate speech can lead to physical violence and the radicalization of impressionable minds on the internet.


Jim Killock, executive director of the Open Rights Group — a U.K. non-profit committed to preserving and promoting citizens' rights on the internet — thinks the legislation, were it to pass tomorrow, wouldn't just be ineffective. It might even prove counterproductive.

The rampant hate speech online, Killock believes, is symptomatic of a much larger problem. "In some ways, Facebook is a mirror of our society," he says. "This tidal wave of unpleasantness, like racism and many other things, has come on the back of [feeling] disquiet about powerlessness in society, people wanting someone to blame."

Unfortunately, that kind of disillusionment with society won't change overnight. And a policy that addresses only the symptoms of that disillusionment, rather than its causes, is a mistake. By censoring people who already feel silenced, the government only reinforces their beliefs. That's especially troubling when those same people are actively spreading hate speech online.

Plus, a law like the one the DCMS has proposed would effectively make certain kinds of speech illegal, even if that's not what the law says. Killock argues that while a lot of online material may be "unpleasant," it often doesn't violate any laws. And it shouldn't be up to companies to decide where the line between the two lies, he adds. "If people are breaking the law, it frankly is the job of courts to set those boundaries."

And there's good reason to avoid redrawing those legal boundaries to cover online behavior that is technically not illegal: doing so might require the government to rework much broader, sweeping common law concerning freedom of speech. That is unlikely to happen.

The U.K. government's plans are still in the development stage, but there are already plenty of reasons to be skeptical that the law would do what the government intends. Muddying the boundary between illegal and merely objectionable behavior online sets a dangerous precedent, and it could have undesirable consequences, such as satirical content being wrongly flagged as hate speech.

The DCMS is setting itself up for failure: censoring content online will only embolden its critics while failing to address the root issues. It has to find a middle ground if it wants a real shot: too much censorship, and the mistrust of those who feel marginalized will keep building; too little regulation, and internet platforms will continue to make many users feel unwelcome, or even enable violence.

The U.K. government has a few tactics it could try before resorting to regulating speech online. It could incentivize companies to strengthen the appeals process for content takedowns. "If you make it really hard for people to use appeals, they may not use them at all," Killock argues. The government could also introduce legislation ensuring that every user has a standardized way of reporting problematic content online.

But it will take a much bigger shift before we are able to get rid of hate speech in a meaningful way. "Blaming Facebook or the horrendous people and opinions that exist in society is perhaps a little unfair," Killock says. "If people really want to do and say these [hurtful] things, they will do it. And if you want them to stop, you have to persuade them that it's a bad idea."

What do those policies look like? Killock doesn't have the answer yet. "The question we have really is, how do we make society feel better about itself?" says Killock. "And I'm not pretending that that's a small thing at all."

More on regulating speech online: Social Media Giants Need Regulation From a Government That’s Unsure How To Help

