Google to ramp up AI efforts to ID extremism on YouTube – TechCrunch

Posted: June 19, 2017 at 7:17 pm

Last week Facebook solicited help with what it dubbed "hard questions", including how it should tackle the spread of terrorism propaganda on its platform.

Yesterday Google followed suit with its own public pronouncement, via an op-ed in the FT newspaper, explaining how it's ramping up measures to tackle extremist content.

Both companies have been coming under increasing political pressure, especially in Europe, to do more to quash extremist content, with politicians in the UK and Germany pointing the finger of blame at platforms such as YouTube for hosting hate speech and extremist material.

Europe has suffered a spate of terror attacks in recent years, with four in the UK alone since March. And governments in the UK and France are currently considering whether to introduce a new liability for tech platforms that fail to promptly remove terrorist content, arguing that terrorists are being radicalized with the help of such material.

Earlier this month the UK's prime minister also called for international agreements between allied, democratic governments to regulate cyberspace to prevent the spread of extremism and terrorist planning.

In Germany, meanwhile, a proposal that includes big fines for social media firms that fail to take down hate speech has already gained government backing.

Besides the threat of fines being cast into law, there's an additional commercial incentive for Google after YouTube faced an advertiser backlash earlier this year related to ads being displayed alongside extremist content, with several companies pulling their ads from the platform.

Google subsequently updated the platform's guidelines to stop ads being served against controversial content, including videos containing "hateful", "incendiary" and "demeaning" material, so their makers could no longer monetize it via Google's ad network. The company still needs to be able to identify such content for this measure to work, however.

Rather than requesting ideas for combating the spread of extremist content, as Facebook did last week, Google is simply stating its plan of action: detailing four additional steps it says it's going to take, and conceding that more action is needed to limit the spread of violent extremism.

"While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now," writes Kent Walker, general counsel at Google, in a blog post.

The four additional steps Walker lists are:

1. Increased use of machine-learning technology to train new "content classifiers" that can more quickly identify and remove extremist and terrorism-related videos.

2. More independent human experts in YouTube's Trusted Flagger program, with additional expert NGOs brought in to help flag problem content.

3. A tougher stance on videos that do not clearly violate its policies, such as those containing inflammatory religious or supremacist content: these will be placed behind a warning, and will not be monetized, recommended or eligible for comments.

4. An expanded role in counter-radicalization efforts, including redirecting potential recruits toward videos that debunk extremist messaging.

Despite increasing political pressure over extremism and the attendant bad PR (not to mention the threat of big fines), Google is evidently hoping to retain its torch-bearing stance as a supporter of free speech by continuing to host controversial hate speech on its platform, just in a way that means it can't be directly accused of providing violent individuals with a revenue stream. (Assuming it's able to correctly identify all the problem content, of course.)

Whether this compromise will please either side of the "remove hate speech" vs. "retain free speech" debate remains to be seen. The risk is it will please neither demographic.

The success of the approach will also stand or fall on how quickly and accurately Google is able to identify problem content. And policing user-generated content at such scale is a very hard problem.

It's not clear exactly how many thousands of content reviewers Google employs at this point; we've asked and will update this post with any response.

Facebook recently added 3,000 reviewers to its headcount, bringing the total to 7,500. CEO Mark Zuckerberg also wants to apply AI to the content identification issue, but has previously said it's unlikely to be able to do this successfully for many years.

Touching on what Google has been doing already to tackle extremist content, i.e. prior to these additional measures, Walker writes: "We have thousands of people around the world who review and counter abuse of our platforms. Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts."
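
Walker doesn't go into detail on how that image-matching works, and Google hasn't published its implementation. But the general technique is well understood: keep a database of fingerprints (hashes) of known terrorist videos and check each new upload against that database. Here's a minimal, purely illustrative Python sketch of the idea, using an exact cryptographic hash; all the function names are hypothetical, and a production system would rely on perceptual hashes that survive re-encoding, cropping and watermarking rather than the byte-exact matching shown here.

    import hashlib

    # Purely illustrative sketch -- not Google's implementation.
    # Idea: fingerprint every video removed as terrorist content, then
    # block any new upload whose fingerprint matches a known-bad one.

    known_bad_hashes = set()  # fingerprints of previously removed videos

    def fingerprint(video_bytes: bytes) -> str:
        # SHA-256 only catches byte-identical re-uploads; a real system
        # would use perceptual hashing so edited copies still match.
        return hashlib.sha256(video_bytes).hexdigest()

    def register_removed_video(video_bytes: bytes) -> None:
        # Record a removed video so future re-uploads can be blocked.
        known_bad_hashes.add(fingerprint(video_bytes))

    def is_known_reupload(video_bytes: bytes) -> bool:
        # Check an incoming upload against the known-content database.
        return fingerprint(video_bytes) in known_bad_hashes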
