Facebook can’t solve its hate speech problem with automation – Popular Science

Posted: July 8, 2017 at 4:08 am

How, exactly, are people supposed to talk to each other online? For Facebook, this is as much an operational question as it is a philosophical one.

Last week, Facebook announced that it has two billion users, which means roughly 27 percent of the world's 7.5 billion people use the social media network. In a post at Facebook's Hard Questions blog, the company offered a look at the internal logic behind how it manages hate speech, the day before ProPublica broke a story about apparently hypocritical ways in which those standards are applied. Taken together, they make Facebook's attempt to regulate speech look impossible.

Language is hard. AI trained on human language, for example, will replicate the biases of its users, just by seeing how words are used in relation to each other. And the same word, in the same sentence, can mean different things depending on the identity of the speaker, the identity of the person to whom it's addressed, and even the manner of conversation. And that's not even considering the multiple definitions of a given word.
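To make that first point concrete, here is a minimal sketch of how a model trained only on word co-occurrence picks up the associations baked into its training text. The toy corpus and the particular association tested are invented for illustration and have nothing to do with Facebook's systems.

```python
# Minimal sketch: word vectors built from co-occurrence alone absorb whatever
# associations the training text happens to contain. Toy corpus for illustration.
import numpy as np

corpus = [
    "the nurse said she was tired",
    "the nurse said she would help",
    "the engineer said he was busy",
    "the engineer said he fixed it",
]

# Build a word-by-word co-occurrence matrix within each sentence.
vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for line in corpus:
    words = line.split()
    for i, w in enumerate(words):
        for context_word in words[:i] + words[i + 1:]:
            counts[index[w], index[context_word]] += 1

def cosine(a, b):
    """Cosine similarity between two co-occurrence vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Nothing about gender was ever labeled, yet "nurse" ends up closer to "she"
# than to "he" simply because of how the corpus uses the words together.
print("nurse ~ she:", cosine(counts[index["nurse"]], counts[index["she"]]))
print("nurse ~ he: ", cosine(counts[index["nurse"]], counts[index["he"]]))
```

Scaled up to billions of posts, the same mechanism is how a model quietly inherits the prejudices of the text it learns from.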

"What does the statement 'burn flags not fags' mean?" writes Richard Allan, Facebook's VP of Public Policy for Europe, the Middle East, and Africa. "While this is clearly a provocative statement on its face, should it be considered hate speech? For example, is it an attack on gay people, or an attempt to 'reclaim' the slur? Is it an incitement of political protest through flag burning? Or, if the speaker or audience is British, is it an effort to discourage people from smoking cigarettes (fag being a common British term for cigarette)? To know whether it's a hate speech violation, more context is needed."

Reached for comment, a Facebook spokesperson confirmed that the Hard Questions post wasn't representative of any new policy. Instead, it's simply transparency into the logic of how Facebook polices speech.

"People want certain things taken down, they want the right to say things," says Kate Klonick, a resident fellow at the Information Society Project at Yale. "They want there to be a perfect filter that takes down the things that are hate speech or racist or sexist or hugely offensive."

One reason that Facebook may be parsing how it regulates speech in public is that, thanks to a trove of internal documents leaked to the Guardian, others are reporting on Facebook's internal guidance for what speech to take down and what speech to leave up.

"According to one document, migrants can be referred to as 'filthy' but not called filth,'" reports ProPublica, "They cannot be likened to filth or disease 'when the comparison is in the noun form,' the document explains."

Klonick studies how Facebook governs its users, and while the kinds of moderation discussed in the Hard Questions post aren't new, the transparency is. Says Klonick, "It's not secret anymore that this happens and that your voice is being moderated, your feed is being moderated behind the scenes."

To Klonick's eye, by starting to disclose more of what goes on in the sausage factory, Facebook is trying to preempt criticism of how, exactly, it chooses to moderate speech.

There's nothing, though, that says Facebook has to regulate all the speech it does, beyond what's required by the law in the countries where Facebook operates. Several examples in the Hard Questions post hinge on context: Is the person reclaiming a former slur, or is it a joke among friends or an attack by a stranger against a member of a protected group? But what happens when war suddenly changes a term from casual use to something reported as hate speech?

One example from Hard Questions is how Facebook chose to handle the word "moskal," a Ukrainian slang term for Russians, and "khokhol," a Russian slang term for Ukrainians. When a conflict between Russia and Ukraine broke out in 2014, people in both countries started reporting the terms used by the opposing side as hate speech. In response, says Allan, "We did an internal review and concluded that they were right. We began taking both terms down, a decision that was initially unpopular on both sides because it seemed restrictive, but in the context of the conflict felt important to us."

One common use of reporting features on websites is for people to simply report others with whom they disagree, invoking the site's power to censor their ideological foes. With regular language turning into slurs in the midst of a war, Facebook appears to have chosen to try to calm tensions itself, by removing posts containing the offending words.

"I thought that example was really interesting because he says explicitly that the decision to censor those words was unpopular on both sides," says Jillian York, the EFF's Director for International Freedom of Expression. "Thats very much a value judgement. Its not saying 'people were killing themselves because of this term, and so were protecting ourselves from liability;' which is one thing that they do, one thats a little more understandable. This is Facebook saying, 'the people didnt want this, but we decided it was right for them anyway.'"

And while Facebook ultimately sets policy about what to take down and what to leave up, the work of moderation is done by people, and as with Facebook's moderation of video, this work will continue to be done by people for the foreseeable future.

"People think that its easy to automate this, and I think that that blogpost is why its so difficult right now, how far we are from automating it," says Klonick. "Those are difficult human judgements to make, were years away from that. These types of examples that Richard Allen talked about in his blog post are exactly why were so far from automating this process."

Again, Facebook is deciding the rules and standards for speech for over a quarter of the world's population, a scale that few governments in history have ever approached, let alone exceeded. (Ancient Persia is a rare exception.) Given the enormity of the task, it's worth looking at not just how Facebook chooses to regulate speech, but why it chooses to do so.

"On scale, moderating content for 2 billion people is impossible," says York, "so why choose to be restrictive beyond the law? Why is Facebook trying to be the worlds regulator?"
