A social network so friendly it can't tell the difference between a business and a terrorist cell.

Poor Taste

If you still use Facebook, you're probably familiar with those algorithmically produced "year in review" videos that the site churns out, setting a slideshow of your posts to cheerful music. At best, the videos are obnoxious.

At worst? They spew violent, hateful content. Researchers represented by the National Whistleblower Center found that Facebook automatically created a peppy, celebratory video on a "business" page for Al-Qaeda, according to The Associated Press.

Algorithmic Whiff

Facebook talks a big game about its algorithmic hate speech filters.

"After making heavy investments, we are detecting and removing terrorism content at a far higher success rate than even two years ago," Facebook told the AP. "We don't claim to find everything and we remain vigilant in our efforts against terrorist groups around the world."

Meanwhile, those allegedly sophisticated efforts missed a business profile named "Al-Qaeda," as well as hundreds of personal profiles named after terrorist leaders, all identified in the whistleblowers' study.

Fundamentally Flawed

According to the study, Facebook's automatic features contributed content to extremist pages with thousands of likes, and the company removed only about 38 percent of posts that prominently featured hate symbols. Sometimes the extremists used Facebook's own tools to create their media, meaning their images flew completely under the radar.

"The whole infrastructure is fundamentally flawed," UC Berkeley digital forensics expert Hany Farid told the AP. "And there's very little appetite to fix it because what Facebook and the other social media companies know is that once they start being responsible for material on their platforms it opens up a whole can of worms."

READ MORE: Facebook auto-generates videos celebrating extremist images [The Associated Press]

More on Facebook: Facebook Needs Humans *And* Algorithms To Filter Hate Speech
