"There's no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them," reads a new Facebook company blog post.
Aware that terrorists take advantage of social media to spread propaganda, Facebook on Thursday divulged some of its methods for combating the problem, including recent efforts to employ machine learning to automatically identify objectionable content.
"Our stance is simple: There's no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them," states a company blog postauthored by Facebook officialsMonika Bickert, director of global policy management, andBrian Fishman, counterterrorism policy manager.
The post came shortly after news broke that James Hodgkinson, the man who on Wednesday shot and critically injured House Majority Whip Steve Scalise at a baseball practice, regularly posted angry extremist views on Facebook regarding President Donald Trump and other Republicans. That same Wednesday, Facebook removed Hodgkinson's online profiles, according to various reports.
Facebook on Thursday also acknowledged the controversy surrounding terrorists' use of encrypted messaging platforms, such as the company's WhatsApp service, to communicate securely with each other. Following the March 2017 Westminster terror attack, British home secretary Amber Rudd suggested that UK law enforcement must be able to listen in on WhatsApp conversations, after it was discovered that the attacker, Khalid Masood, had used the service shortly before murdering four people.
Defending encryption technology, Bickert and Fishman note that these services also have legitimate purposes, such as protecting the privacy of journalists and activists. They wrote that while Facebook does not have the ability to read encrypted messages, "we do provide the information we can in response to valid law enforcement requests, consistent with applicable law and our policies."
Prior to Thursday's post, Facebook had not publicly detailed its use of AI to root out terrorist activity on its platforms. According to the post, the company is focusing its most cutting-edge machine-learning techniques on curbing terrorist content posted by ISIS, Al Qaeda, and their affiliates, adding that these efforts are "already changing the ways we keep potential terrorist propaganda and accounts off Facebook."
Facebook reported that its systems can image-match uploaded photos and videos against content previously linked to terrorism, rejecting such forbidden material before it is ever displayed.
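The post does not describe the matching mechanics, but a standard technique for this kind of task is perceptual hashing: reduce each image to a short fingerprint that survives re-encoding and minor edits, then compare fingerprints by Hamming distance. Below is a minimal sketch using the open-source Pillow and imagehash packages; the file names, threshold, and flagged-hash list are illustrative assumptions, not Facebook's actual system.

```python
# Minimal sketch of hash-based image matching, the general technique the
# article describes; NOT Facebook's implementation.
# Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

# Hypothetical fingerprints of images previously flagged by reviewers.
flagged_hashes = [imagehash.phash(Image.open(p))
                  for p in ("flagged1.jpg", "flagged2.jpg")]

def is_known_flagged(path, max_distance=5):
    """Return True if the image's perceptual hash is within `max_distance`
    bits (Hamming distance) of any previously flagged hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in flagged_hashes)

if is_known_flagged("upload.jpg"):
    print("Reject: matches previously flagged content")
```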
The company is also experimenting with natural-language recognition in order to identify content that appears to advocate terrorism. To that end, Facebook has been feeding previously flagged content to its AI engine so that it does a better job of recognizing such language in the future.
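As a rough illustration of how previously flagged text can train such a classifier, here is a minimal scikit-learn sketch in which TF-IDF features feed a logistic-regression model. The training texts and labels are placeholders invented for illustration; a production system would differ in nearly every detail.

```python
# Minimal sketch of learning from reviewer-flagged text; data is invented.
# Requires: pip install scikit-learn
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder examples standing in for posts human reviewers labeled.
texts = ["example post advocating violence", "ordinary post about sports",
         "another flagged propaganda post", "benign vacation update"]
labels = [1, 0, 1, 0]  # 1 = flagged by reviewers, 0 = benign

# TF-IDF features over unigrams and bigrams, then a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; a high probability could route it to human review.
print(model.predict_proba(["new incoming post text"])[0][1])
```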
Additionally, Facebook is using algorithms to determine whether various pages, posts, profiles, and groups likely support terrorism, based on their connections and shared attributes with other confirmed terrorist pages. The company also claims it is getting faster at detecting new fake accounts created by repeat offenders.
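One simple way to turn "connections with confirmed terrorist pages" into a signal is to score each node by the fraction of its neighbors that are already confirmed violators. The toy sketch below, with an invented graph, confirmed set, and threshold, shows the idea; Facebook's actual graph signals are not public.

```python
# Toy sketch of flagging pages by their graph neighborhood; everything here
# (graph, names, threshold) is invented for illustration.
confirmed = {"page_a", "page_b"}  # pages already confirmed as violating

# Adjacency list: which pages are connected (admins, likes, shares).
graph = {
    "page_a": {"page_c", "page_d"},
    "page_b": {"page_c"},
    "page_c": {"page_a", "page_b", "page_e"},
    "page_d": {"page_a"},
    "page_e": {"page_c"},
}

def suspicion_score(node):
    """Fraction of a node's neighbors that are confirmed violators."""
    neighbors = graph.get(node, set())
    if not neighbors:
        return 0.0
    return len(neighbors & confirmed) / len(neighbors)

# Pages whose neighborhoods are dominated by confirmed pages get reviewed first.
for page in graph:
    if page not in confirmed and suspicion_score(page) >= 0.5:
        print(f"{page}: score {suspicion_score(page):.2f} -> queue for review")
```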
Facebook has also begun to apply these AI techniques to take down terrorist accounts on additional platforms, including WhatsApp and Instagram. "Given the limited data some of our apps collect as part of their service, the ability to share data across the whole family is indispensable to our efforts to keep all our platforms safe," the blog post reads.
Outside the realm of AI, Facebook also relies on human expertise to counter terrorist activity online, including its global Community Operations teams, which review user complaints and reports; more than 150 terrorism and safety specialists; and a global team formed to respond promptly to emergency law enforcement requests. The company also relies on cooperation with industry partners, governments, and various community groups and non-governmental organizations.
Companies are increasingly turning to AI and automation technologies to fight a variety of illegal and forbidden online activity. A new study released this week by cybersecurity and application-delivery solution provider Radware found that 81 percent of surveyed executives reported that they had either recently implemented or begun relying more heavily on automated security solutions. Moreover, 57 percent of the polled executives said they trust automated systems as much as or more than humans to protect their organizations, and 38 percent predicted that automated security systems would be the primary resource for managing cybersecurity within two years.