In an age when nearly everything has been digitized, even terrorist groups use the Internet and social media to get their message across. As a result, Facebook and YouTube, two of the most popular video platforms, are filled with extremist content. Both companies are now reportedly removing that content from their respective platforms, quietly and automatically.
The automated technique being used to take down extremist content has been around for a while; it was previously used to remove copyright-protected material. The system computes a unique fingerprint from a video's content. Copies of the same material share that fingerprint, so once one video in a category has been identified, all matching uploads can be found and removed rapidly.
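The matching step described above can be sketched in a few lines. This is a minimal illustration, not the platforms' actual pipeline: real systems use perceptual fingerprints that survive re-encoding and cropping, whereas this sketch substitutes an exact cryptographic hash, and the blocklist entry is hypothetical sample data.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Stand-in fingerprint: a cryptographic hash of the raw bytes.
    Production systems use perceptual hashes robust to re-encoding;
    exact matching here is a deliberate simplification."""
    return hashlib.sha256(data).hexdigest()

# Blocklist of fingerprints taken from previously flagged uploads
# (hypothetical data for illustration only).
blocked = {fingerprint(b"known-extremist-clip")}

def should_remove(upload: bytes) -> bool:
    """Flag any new upload whose fingerprint matches the blocklist."""
    return fingerprint(upload) in blocked
```

The key property is that the check runs at upload time against a precomputed set, so every duplicate of an already-identified video is caught without any human viewing it again.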
YouTube and Facebook, the platforms with the widest reach, are under pressure from governments around the world to adopt these systems and help eradicate terrorism. Neither company has confirmed the practice itself; the reports come from outside sources. The companies are, in fact, hesitant to use the system because they want as little outside intervention in their policies as possible.
Furthermore, the automated system has so far been used against pornography and copyright-infringing content. Extremist videos are a somewhat different case: until now they have only been caught after users flagged or reported them as inappropriate. The system would therefore have to be adapted to some extent, and perhaps keep humans involved in the review process.
Companies that have embraced the system to remove extremist content from their sites are not coming forward to discuss it openly, for two main reasons: they have little incentive to advertise censorship, and, more importantly, they don't want terrorists learning how to manipulate their systems.