Facebook, Twitter, and social media vs. the world

It's a strange fact of history that critics began calling on Twitter to ban Donald Trump shortly after he was elected president and got what they wanted a few days before the end of his term. For four years, we talked about a ban: why it was necessary, why it was impossible; how refusing showed that platforms were principled, how it showed they were hypocrites. And just when there was nothing left to say, it happened. There were detailed justifications for the ban when it came (it was a state of emergency, the best among bad options, and so on), but the timing suggests a simpler logic. As long as Trump was president, platforms couldn't punish him. Once he'd lost the election, he was fair game.

Twitter's decision to ban Trump had a cascading effect: Facebook issued its own ban, then YouTube, then everyone else. The Trump-friendly social network Parler came under scrutiny, and the platform's host, Amazon Web Services, took a closer look at the violent threats that had spread on Parler in advance of the Capitol riot, ultimately deciding to drop the network entirely. Trump boasted about starting his own social network, only to offer a short-lived series of online press releases. Journalists who shared screenshots of the releases too eagerly were shamed for helping the disgraced president evade the ban, and social pressure made the screenshots less common. Soon, even Facebook ads showing Trump speeches drew criticism as a potential evasion of the ban.

The months since have perfectly illustrated the effectiveness of deplatforming. Once inescapable, the former president has all but disappeared from the daily discourse. He continues to hold rallies and make statements, but the only way to hear about them is to go to a rally in person or tune in to fringe networks like OANN or Newsmax. Before the ban, there was real doubt about whether deplatforming a major political figure could work. After the ban, it is undeniable.

For the most part, platforms have avoided meaningful blowback for the decision, although there's been growing angst about it from the American right wing. If they can do this to Trump, the thinking goes, they can do it to anyone. It's entirely true. This is the dream of equal justice under the law: anyone who commits murder should be worried that they'll go to jail for it. There should be no one so powerful that they can't be kicked out of a restaurant if they start spitting in other people's food. In this one difficult case, Twitter was able to live up to the ideal of equal justice. But as we look to the next 10 years of speech moderation, it's hard not to be anxious about whether platforms can keep it up.

We tend to talk about moderation politics as something that happens between platforms and users (i.e., who gets banned and why), but the Trump debacle shows there's another side. Like all companies, social platforms have to worry about the politics of the countries they operate in. If companies end up on the wrong side of those politics, they could face regulatory blowback or get ejected from the country entirely. But moderation is politically toxic: it never makes friends, only enemies, even as it profoundly influences the political conversation. Increasingly, platforms are arranging their moderation systems to minimize that political fallout above all else.

The problem is much bigger than just Twitter and Trump. In India, Facebook has spent the last seven years in an increasingly fraught relationship with Prime Minister Narendra Modi, cultivating close ties with the country's leader while violence against India's Muslim minority continued to escalate. In Myanmar, a February coup forced Facebook to welcome groups it had previously counted as terrorists and suppress groups that mounted military opposition to the new regime.

Not surprisingly, both countries have flirted with an outright ban on Facebook, flexing moderation systems of their own. Modi has spoken openly about a ban, and India has less to lose from one than Facebook does: the platform would drop 260 million users overnight, and it wouldn't take long for markets and investors to realize the implications. So when a post pushes the limits of what's acceptable, Facebook will usually make exceptions.

The starkest example of this dynamic was revealed by the Facebook Papers in October. In Vietnam, the company faced growing pressure from the ruling Communist Party to moderate against "anti-state" content, essentially building the repressive values of the regime into its own moderation strategy. But when the issue came to a head, Facebook CEO (and now Meta CEO) Mark Zuckerberg personally directed the company to comply, saying it was more important to "ensure our services remain available for millions of people who rely on them every day." Given the choice between protecting the independence of its moderation system and staying on the government's good side, Zuckerberg chose the easy way out.

There was a time when a country-wide blockade of Facebook would have been unthinkable. Civil society groups like Access Now have spent years trying to establish a norm against internet blackouts, arguing that they provide cover for human rights abuses. But Facebook is so toxic in US politics that it's hard to imagine a president lobbying foreign countries on its behalf. When Myanmar instituted a temporary block in the wake of the country's military coup, there were few objections.

These are ugly, difficult political shifts, and Facebook is playing an active role in them, just as much as national institutions like the press or the National Guard. Facebook isn't pretending to be a neutral arbiter anymore, and for all the posturing of Facebook's Oversight Board (a pseudo-independent body with authority over major moderation decisions), there isn't any greater noble logic to the platform's choices. They're just trying to stay on the right side of the ruling party.

This kind of realpolitik isn't what deplatformers had in mind. The goal was to push Facebook and the others to take responsibility for their impact on the world. But instead of making the platforms more responsible, it has made them more unapologetic about the political realities. These are just corporations protecting themselves. There's no longer any reason to pretend otherwise.

We often talk about tech companies as if they're unprecedented, but the world has grappled with this kind of transnational corporate power before. If you want to stop Coca-Cola or the United Fruit Company from killing union leaders, it's not enough to pass laws in the US. You need an international standard of conduct, reaching beyond nation-specific concepts like probable cause or the First Amendment.

For decades, a constellation of international activists has been building such a system, a body of voluntary transnational agreements generally referred to as international human rights law. The name is misleading in some ways, since it's less a judicial system than a series of non-binding treaties agreeing to general principles: countries shouldn't discriminate on the basis of race or gender, they shouldn't use children as soldiers, they shouldn't torture people.

The language of the treaties is purposefully vague, and enforcement mostly consists of public shaming. (The 1987 Convention Against Torture didn't prevent the United States from embracing "enhanced interrogation techniques," for example.) But you can see the beginnings of an international consensus there, nudging us towards a less oppressive and violent world.

For the more thoughtful critics of social media, this is the only system broad enough to truly rein in a company like Facebook. Jillian York, who dwells on the Facebook problem at length in her book Silicon Values, told me the only long-term fix to the troubles roiling India and the United States would be something on that scale. "We need to be thinking about an international mechanism for holding these companies accountable to a standard," she said.

Optimists might see the shift towards deplatforming and away from free speech extremism as a step in the right direction. Reddit-style speech libertarianism is very much an American concept, relying on the relatively unusual protections of the First Amendment. But rather than drifting towards an international consensus, York sees platforms as simply cut adrift, doing whatever fits the needs of their employees and users at a given moment. In this dispensation, there are few principles anchoring companies like Facebook and Twitter, and few protections if they go astray.

"We're now in a phase where they're acting of their own accord," York says. "I don't think the current scenario is workable for much longer. I don't think people will put up with it."

Our best glimpse of the post-deplatforming future has been Facebook's Oversight Board, which has done its best to square the realities of a platform with some kind of higher speech principles. It's the kind of notice-and-appeal system that advocates have been asking platforms to adopt for years. Faced with a never-ending stream of hard choices, Facebook put tens of millions of dollars into building a master moderator that everyone can trust. For all the system's flaws, it's the best anyone's been able to do.

In practice, most of the Oversight Board's rulings trace the line where differences of opinion give way to political violence. Of the 18 decisions from the board so far, 13 are directly related to racial or sectarian conflicts, whether dealing with Kurdish separatists, anti-Chinese sentiment in Myanmar, or a jokey meme about the Armenian genocide. The specifics of a ruling might turn on a particular Russian term for Azerbaijanis, but the potential for mass oppression and genocide looms in the background of each one. Taking on the work of moderating Facebook, the Oversight Board has ended up as the arbiter of how much racism is acceptable in conflicts all around the world.

But for all the board's public deliberations, it hasn't changed the basic problem of platform politics. Whenever the Oversight Board's fragile principles for online speech conflict with Facebook's corporate self-interest, the board loses out. The most egregious example so far is Facebook's Crosscheck system, which resulted in leniency for high-profile accounts and which the Oversight Board had to find out about from The Wall Street Journal. But even as the company sidesteps its own panel of experts, Facebook can retreat into platitudes about the free exchange of opinions, as if every choice were being guided by a higher set of principles.

We've been using "freedom of opinion" to sidestep this mess for a very long time. Jean-Paul Sartre described a version of the same pattern in his 1946 work Anti-Semite and Jew, writing just after the Allied liberation of Paris. In the opening lines of the essay, he marvels at how often the blood-soaked rhetoric of the Nazis was minimized as simply anti-Semitic "opinion":

"This word 'opinion' makes us stop and think. It is the word a hostess uses to bring to an end a discussion that threatens to become acrimonious. It suggests that all points of view are equal; it reassures us, for it gives an inoffensive appearance to ideas by reducing them to the level of tastes. All tastes are natural; all opinions are permitted… In the name of freedom of opinion, the anti-Semite asserts the right to preach the anti-Jewish crusade everywhere."

This is the dream that tech companies are only now waking from. Companies like Facebook play the role of the hostess, hoping for discussion that is lively enough to keep us in the room but not so heated that it will damage the furniture. But we can no longer pretend these opinions are safely cordoned off from the world. They are part of the same power struggles that shape every other political arena. Worse, they are subject to the same dangers. We can only hope that, over the next 10 years, platforms find a better way to grapple with them.
