When you give people a channel to send out live videos, some will use it to broadcast the most horrendous, disgusting things imaginable. Facebook has learned this firsthand; the site has become infamous for livestreams of violent acts, murder, and even suicide.

To keep these kinds of violent videos off the platform, Facebook introduced both AI systems and human moderators to scan for violent broadcasts and take them down. Over time, the company has come to rely more heavily on its AI moderators.

Facebook, along with other online platforms, has been developing these violence-flagging algorithms for a while. But now Facebook plans to develop its own AI hardware: a custom chip to run those algorithms. The biggest advantage of a dedicated chip? It can run the AI far more efficiently than general-purpose hardware, which would make the filters much faster, according to Bloomberg. As it stands, the AI filters catch violent videos in about ten minutes on average, though some stay on the site for hours. Ideally, Facebook would like to take those livestreams down as they're happening. There's no telling whether a chip like this would get the company all the way there, but it would almost certainly get it closer.

This seems like a sound investment for a company that wants to project a more-or-less wholesome, family-friendly image. And if its video-flagging tools do drastically improve how quickly videos of suicides and murders are caught, some of those violent acts may never happen at all, since would-be perpetrators would lose their audience. That's the same logic behind media guidelines against glorifying serial killers.

But what remains unclear is how Facebook will train its algorithm to either flag or permit violent videos posted by activists or bystanders to raise awareness of violence perpetrated by others. Like, say, footage documenting police violence.

When then-police officer Jeronimo Yanez shot and killed Philando Castile in 2016, Castile's girlfriend streamed the aftermath on Facebook Live. Facebook couldn't decide what to do with the footage, first deleting it, then re-uploading it with a graphic content warning, before ultimately removing it again. Facebook's stated logic was that videos raising awareness of violence would be allowed, while those celebrating it would be deleted.

In theory, this means watchdogs ought to be able to keep holding people accountable via Facebook Live, but it's unclear whether Facebook's growing reliance on AI to flag violence will account for the nuanced context of each violent video. It's possible that the company's new filters could do away with violent broadcasts altogether, no matter their purpose.

