Challenges of Combating Terrorism

On the Sunday following the UK terrorist attack on London Bridge, British Prime Minister Theresa May leveled a portion of the blame at social media companies in a televised address, saying, "We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet — and the big companies that provide internet-based services — provide."

It is the latest blow in a long-running dispute between civil institutions and privately owned social media companies. Three lawsuits have been filed against social media giants in the last year by families of the victims of the Pulse nightclub shooting in Florida, the Paris attacks of November 2015, and the San Bernardino attack of December 2015. But even when social media outlets take on the responsibility of combating terrorism-inciting content, they face many challenges.

First, there is the problem of the content's form, which is usually video. Effective automatic content blockers have been developed for photographs, but video poses an entirely different problem: while a photo can be scanned as a single image, a video contains hundreds or thousands of individual frames, which makes scanning it far harder.
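To make the difference concrete, here is a minimal sketch of frame-by-frame scanning, assuming OpenCV for video decoding; `flag_image` is a hypothetical stand-in for whatever photo-level detector a platform already runs. Even sampling only one frame per second of a 30-fps video, a ten-minute clip still requires 600 separate checks.

```python
# A minimal sketch of why video is harder than photos: every sampled frame
# must be decoded and checked individually. OpenCV (cv2) is assumed for
# decoding; `flag_image` is a hypothetical placeholder for an existing
# photo-level detector.
import cv2

def scan_video(path, flag_image, sample_every_n=30):
    """Run an image-level check on every Nth frame of a video file."""
    cap = cv2.VideoCapture(path)
    flagged, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream
            break
        if index % sample_every_n == 0 and flag_image(frame):
            flagged.append(index)
        index += 1
    cap.release()
    return flagged
```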

The quantity of these videos is also daunting. On Facebook alone, around 100 million hours of video are watched daily, which is a huge amount to moderate. The problem is amplified by copies and shares, all of which can be reposted even if the original is removed. Without an adequate software solution for monitoring these videos, the task becomes an almost impossible feat: humans looking for virtual needles in some of the biggest data haystacks the world has ever known.
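A back-of-envelope calculation shows the scale. Assuming a typical playback rate of 30 frames per second (an assumption for illustration; note the 100-million-hour figure measures watch time, not unique uploads):

```python
# Rough arithmetic on the scale of the moderation problem.
hours_watched_per_day = 100_000_000           # Facebook's reported daily watch time
seconds_per_day = hours_watched_per_day * 3600
frames_per_day = seconds_per_day * 30         # assuming 30 frames per second
print(f"{frames_per_day:.2e}")                # ~1.08e13: over ten trillion frames
```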


On top of this, social media companies must also balance safety against privacy and human rights. A joint press release from Facebook, Microsoft, Twitter, and YouTube in December 2016 reiterated their commitment "to prevent the spread of terrorist content online while respecting human rights." This highlights the tension between moderation and surveillance: where does the line lie between "terrorist content" and free speech? And how many of our civil liberties concerning privacy are we willing to sacrifice for the unguaranteed promise of being safer?

This is complicated further by company policies on free speech and documentation. YouTube's policy permits media "intended to document events connected to terrorist acts or news reporting on terrorist activities" as long as those clips include "sufficient context and intent." This creates a thin line that automated software would find extremely difficult to navigate, even if sufficiently capable video-filtering software were developed.

New War, New Weapons

In order to save countless lives from terrorist attacks, which are becoming ever more frequent, new tools must be developed. For now, though, the strongest tool we have against terrorist content online is our own diligence. While Mark Zuckerberg is hiring 3,000 people to find and block violent videos, managing such a large volume of content is only possible with the support of an equally large community willing to report terrorist content.

Hashing tools are also being developed to tackle the problem of terrorists using the video format. The Counter Extremism Project (CEP) has developed eGLYPH, which computes a unique signature, called a "hash," for each video or video segment, and then reports any match it finds during scans to the relevant moderator.
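eGLYPH's actual algorithm is proprietary, so the sketch below substitutes a basic "average hash" as a stand-in: each frame is reduced to a 64-bit fingerprint and looked up in a database of fingerprints taken from known extremist footage.

```python
# A simplified stand-in for hash-based matching; this is NOT eGLYPH's
# algorithm, which has not been published. Each frame becomes a 64-bit
# fingerprint that is checked against a set of known-content hashes.
import cv2

def average_hash(frame, size=8):
    """Downscale to 8x8 grayscale and threshold at the mean: a 64-bit hash."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size))
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def is_known(frame, known_hashes):
    """True if this frame's fingerprint is already in the hash database."""
    return average_hash(frame) in known_hashes
```

Production systems rely on robust hashes designed to survive re-encoding, cropping, and watermarking; an exact-match toy like this one would be defeated by the smallest edit, which is precisely the gap tools like eGLYPH aim to close.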

The most comprehensive of these hash databases is CEP's own. Facebook, Microsoft, Twitter, and YouTube, however, have declined free access to the database and free use of the eGLYPH tool, choosing instead to form their own consortium. Any progress made there, though, will not apply to live-streamed video, which creates content in real time and therefore cannot be checked against any database of known material. This has already become a reality with the case of the terrorist Larossi Abballa, who streamed video to Facebook Live following his June 2016 attack in France.

In the near future, we may have artificial intelligence managing the process. Zuckerberg claimed in an open letter that Facebook is working on algorithms that would be able to "tell the difference between news stories about terrorism and actual terrorist propaganda," as well as identify other inappropriate content. While Google's AI efforts, Microsoft's, and Elon Musk's OpenAI could be applied in a similar manner, no official announcements have been made yet.
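Facebook has not published how such an algorithm would work, but as a minimal, hypothetical sketch, the general approach might pair a bag-of-words text representation with a simple classifier trained on posts that human reviewers have already labeled; real systems would use far richer models and signals.

```python
# A hypothetical sketch of the kind of text classifier such a system might
# start from; Facebook's actual approach has not been published. The
# training data (`texts`, `labels`) is assumed to have been labeled
# "news" or "propaganda" by human reviewers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_news_vs_propaganda(texts, labels):
    """TF-IDF over word unigrams and bigrams, fed to logistic regression."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model

# Usage with hypothetical data:
# model = train_news_vs_propaganda(posts, reviewer_labels)
# model.predict(["Breaking: authorities respond to attack in city center"])
```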

