Facebook dabbling with 'Artificial Intelligence' to remove terrorist content

Posted June 16, 2017

After facing criticism from European Union leaders following a string of terrorist attacks in the UK, Facebook on Thursday outlined the ways it's stepping up its efforts to curb extremist content on its social network, including its use of artificial intelligence.

"This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too," the company wrote.

The post comes after United Kingdom Prime Minister Theresa May announced plans to press internet and technology companies to play a more active role in counterterrorism.

Facebook has recently faced a barrage of criticism from users who say it has not done enough to keep extremist content off its platforms.

In a blog post, Monika Bickert, Facebook's director of global policy management, and Brian Fishman, its counterterrorism policy manager, described how the network is automating the process of identifying and removing jihadist content linked to the Islamic State group, al-Qaeda and their affiliates, with plans to add other extremist organizations over time. Earlier this month, British Prime Minister Theresa May called on governments to form global agreements to prevent the spread of extremism online.

One limitation is that this image scanning does not prevent people from joining Facebook and then using its messaging tools to communicate with, and ultimately recruit, others.

"Encryption technology has many legitimate uses, from protecting our online banking to keeping our photos safe," Bickert and Fishman wrote. "Already, the majority of accounts we remove for terrorism we find ourselves."

The effort extends to other Facebook applications, including WhatsApp and Instagram, according to Bickert and Fishman.


"Our stance is simple: there's no place on Facebook for terrorism," Bickert and Fishman wrote.

The company is also analysing text previously removed for praising or supporting groups such as IS, in an effort to work out text-based signals that a post may be terrorist propaganda.

Britain's interior ministry welcomed Facebook's efforts but said technology companies needed to go further. Facebook said it is now training this text-analysis system on material the company has previously removed for promoting terrorism.
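Facebook has not published how this text analysis works. As a purely illustrative sketch of the general idea of learning "text-based signals" from previously removed posts, one could imagine a simple bag-of-words scorer (the function names and scoring rule here are hypothetical, not Facebook's actual system):

```python
from collections import Counter

def train_signal_model(removed_texts):
    """Build word counts from text previously removed for promoting terrorism.

    The word frequencies serve as crude 'text-based signals'.
    """
    counts = Counter()
    for text in removed_texts:
        counts.update(text.lower().split())
    return counts

def propaganda_score(model, text):
    """Score a new post by the fraction of its words seen in removed text."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if model[word] > 0)
    return hits / len(words)
```

A real system would use far richer features and a trained classifier; the point of the sketch is only that past enforcement decisions become training data for flagging new posts, which human reviewers would then check.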

The company now employs more than 150 people who are "exclusively or primarily focused on countering terrorism as their core responsibility".

"We're constantly identifying new ways that terrorist actors try to circumvent our systems - and we update our tactics accordingly", Bickert and Fishman said. It's also working with other social media companies to create a shared database of these digital signatures - known as hashes - to ensure that people can't simply post the same content to Twitter or YouTube. But the company doesn't use technology to screen new content for policy violations, saying computers lack the nuance to determine whether a previously uncategorized video is extremist.
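The shared-database idea can be sketched in a few lines. This toy example uses a cryptographic SHA-256 digest as the "digital signature", which only catches exact re-uploads; the hashes real platforms share are typically perceptual fingerprints that also survive minor edits. The function names and sample data are illustrative, not any company's actual API:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Return a hex digest serving as the content's digital signature."""
    return hashlib.sha256(data).hexdigest()

# A shared database of hashes of already-removed content (illustrative bytes).
shared_hash_db = {content_hash(b"known propaganda video bytes")}

def is_known_extremist_content(upload: bytes) -> bool:
    """Check an upload against the shared hash database before it goes live."""
    return content_hash(upload) in shared_hash_db
```

Because only hashes are shared, participating companies can block re-uploads of flagged material without exchanging the material itself.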

In the post, Bickert and Fishman admit "AI can't catch everything".

The blog post also highlighted Facebook's efforts to fund and train anti-extremist groups to produce counternarratives, or online content created to undercut terrorist propaganda and dissuade people from joining terrorist groups.