Facebook Inc. (FB), facing calls to do more to stop terrorist propaganda on its social networks, announced it’s been using artificial intelligence to help it block and remove offensive content and posts.

In a blog post, Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager, said that in the wake of recent terrorist attacks, most recently in London, the company has faced questions about the role it plays in fighting terrorism, and that it agrees "social media should not be a place where terrorists have a voice." As a result, it offered a look at what it is doing to combat the spread of hate with AI. (See also: Facebook Praised by EC for Quashing Hate Speech.)

Tech to Quash Terrorism

The executives said Facebook is focused on finding terrorist content immediately, and is using AI to that end. For example, the company said it is deploying image-matching technology that automatically checks photos or videos against known terrorism content when someone tries to upload them. That means that if Facebook had previously removed an ISIS propaganda video, the system prevents others from uploading the same video. AI is also being used to understand text that may call for or advocate terrorism. The company said it is analyzing previously removed text that praised the likes of ISIS and Al Qaeda so it can develop text-based signals to block similar content in the future. Facebook said the algorithm is in the early stages of learning how to find similar posts. (See also: Facebook Declares Total War on Fake News.)
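
The image-matching step described above can be sketched in rough terms with a perceptual hash: reduce an image to a compact fingerprint, then compare new uploads against fingerprints of previously removed content. This is a minimal illustration in Python, not Facebook's actual (non-public) implementation; the 8x8 grid size, function names, and Hamming-distance threshold are all illustrative assumptions.

```python
def average_hash(pixels):
    """Compute a 64-bit 'average hash' from an 8x8 grayscale grid (0-255 values).

    Each bit records whether a pixel is at or above the grid's mean brightness,
    so near-identical images produce near-identical hashes.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_known_match(pixels, blocked_hashes, max_distance=5):
    """Return True if the upload's hash is within max_distance bits of any
    hash of previously removed content (the threshold is an assumption)."""
    h = average_hash(pixels)
    return any(hamming(h, b) <= max_distance for b in blocked_hashes)
```

In practice a real system would hash keyframes of videos as well as still images, and would tolerate small edits (re-encoding, cropping) precisely because the comparison is a bit-distance rather than an exact byte match.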

Since terrorists tend to cluster in groups both offline and online, the social network operator said it's identifying pages, groups, posts and profiles of users supporting terrorism so it can block them, and is leaning on algorithms to identify related material that could be in support of terrorism. "We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account," the executives said in the post, adding that the company is getting better and faster at detecting fake accounts created by repeat offenders. As a result, Facebook has been able to "dramatically" reduce the amount of time recidivist accounts remain on the platform. Facebook is also working on systems for WhatsApp and Instagram, the executives said.
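
The two signals the executives quote — friendship with disabled accounts, and shared attributes with a disabled account — can be expressed as a simple scoring heuristic. The sketch below is a hypothetical illustration of that idea only; the field names, the 30% ratio threshold, and the flag-for-review outcome are assumptions, not details from the blog post.

```python
def disabled_friend_ratio(friend_ids, disabled_ids):
    """Fraction of an account's friends that were disabled for supporting terrorism."""
    if not friend_ids:
        return 0.0
    return sum(1 for f in friend_ids if f in disabled_ids) / len(friend_ids)

def flag_for_review(account, disabled_ids, disabled_attrs, ratio_threshold=0.3):
    """Flag an account when its friend graph or profile attributes resemble
    accounts already disabled for terrorism support.

    `account` is assumed to be a dict like {"friends": [...], "attrs": [...]};
    `disabled_attrs` is the set of attributes seen on disabled accounts.
    """
    ratio = disabled_friend_ratio(account.get("friends", []), disabled_ids)
    shares_attrs = bool(set(account.get("attrs", [])) & disabled_attrs)
    return ratio >= ratio_threshold or shares_attrs
```

A heuristic like this would only queue accounts for further automated or human review; treating either signal alone as proof would sweep in journalists, researchers, and counter-speech accounts that also connect to extremist content.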
