The Great Deplatforming: Can Digital Platforms Be Trusted As Guardians of Free Speech?

Posted: January 31, 2021 at 7:14 am

In 1996, online social media platforms accepted from Congress the role of moderating their own content. The Great Deplatforming that occurred after January 6 was less a silent coup than a good-faith effort to purge online platforms of toxic content.

After former President Trump and many of the extremist followers he goaded were removed from a variety of online platforms (most notably Twitter, Facebook, and YouTube, as well as Reddit), many saw the subsequent silence as a welcome relief. But is that, as Luigi Zingales posits, an emotional reaction to a wrong that is being used to justify something that, at least in the long term, is much worse?

The Great Deplatforming (aka the Night of Short Fingers) has exposed the fact that a great deal of political discourse is occurring on private, for-profit internet platforms. Can these platforms be trusted as our guardians of free speech?

Recognizing that "the Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity," and with the intent of preserving "the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation," Congress granted immunity in 1996 to "interactive computer services" (the statutory equivalent of Twitter, Facebook, YouTube, and other online social media platforms) for any information published by the platforms' users. Without this immunity, social media platforms would simply not exist: the potential liability for the larger platforms arising from the content of the millions upon millions of posts would be far too great a risk.

Congress also granted social media platforms immunity for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." This provision gives platforms an incentive to moderate content and weed out objectionable and illegal material. It can be a vile world out there, as evidenced by some of the hateful, harmful, and obscene comments posted in online forums. As Cloudflare founder Matthew Prince noted, "What we didn't anticipate was that there are just truly awful human beings in the world." The situation has devolved to such an extent that experts now recognize that some human moderators suffer from symptoms resembling post-traumatic stress disorder.

What do we do when the moderation we have encouraged social media platforms to conduct is applied to what some consider political speech? Many fear that platforms have difficulty differentiating between racist and/or extremist posts advocating violence against individuals or institutions based on political views, and simple political opinion. Indeed, Facebook's own executives acknowledge that the company's algorithm-generated recommendations were responsible for the growth of extremism on its platform. Concerns have also been raised that some platforms refuse, in bad faith, to make that differentiation in order to promote their executives' own political agendas. Social media platforms have also been accused of moderating such political speech with a bias toward a particular party.

One approach is to amend Section 230 of the Communications Decency Act, the law that provides internet social media platforms their immunity. Republicans introduced five bills in the 2019-2020 Congressional session calling for amendments to, or the full repeal of, Section 230. For example, Senator Josh Hawley (R-Mo.), claiming a lack of politically neutral content moderation on social media platforms, sponsored the Ending Support for Internet Censorship Act. Under the bill as introduced, online social media platforms with 30 million or more active monthly users in the US (or 300 million or more active monthly users worldwide, or more than $500 million in global annual revenue) would have to obtain an immunity certification from the FTC every two years. A social media platform would be denied certification, and lose its Section 230 immunity, if it was determined to be moderating in a politically biased manner, which would include "disproportionately restricting or promoting access to, or the availability of, information from a political party, political candidate, or political viewpoint." As one article noted, Hawley wants to stop internet censorship by censoring the internet, not to mention regulating political speech.
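
For concreteness, the bill's coverage test can be read as a simple predicate. The sketch below (in Python) encodes only the three thresholds named above; the function name, signature, and structure are our own hypothetical illustration, not language from the bill.

```python
# Hypothetical sketch of the coverage thresholds in the Ending Support for
# Internet Censorship Act, as introduced. Names and structure are illustrative
# assumptions; the bill itself specifies only the user and revenue figures.

def requires_ftc_certification(us_monthly_users: int,
                               global_monthly_users: int,
                               global_annual_revenue_usd: float) -> bool:
    """True if a platform would need a biennial FTC immunity certification."""
    return (us_monthly_users >= 30_000_000              # 30M+ active monthly US users
            or global_monthly_users >= 300_000_000      # 300M+ active monthly users worldwide
            or global_annual_revenue_usd > 500_000_000) # more than $500M global annual revenue

# Example: a platform with 40 million active monthly US users would be covered.
print(requires_ftc_certification(40_000_000, 100_000_000, 0))  # True
```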

In May 2020, President Trump signed an Executive Order claiming online platforms are invoking "inconsistent, irrational, and groundless justifications to censor or otherwise restrict Americans' speech," and stating that online platforms should lose their immunity because, rather than removing objectionable content in good faith as required under the law, they are engaging in deceptive actions by stifling viewpoints with which they disagree. The Executive Order called on the FCC to propose new regulations clarifying immunity under Section 230 and on the FTC to investigate online platforms for deceptive acts. Both Tim Wu and Hal Singer have provided cogent arguments on ProMarket as to why these approaches are doomed.

Online moderation is enforced through each platform's terms of service (TOS), which every user must accept before posting to that platform. After the January 6 siege of the US Capitol, the tech companies that deplatformed a large number of users, including former President Trump, and deleted tens of thousands of posts did so on the basis of users violating their respective TOS. For example, Amazon Web Services (AWS) stopped hosting Parler because of Parler's alleged violations of the AWS TOS, which include an Acceptable Use Policy. Parler's subsequent lawsuit against AWS could have served as a bellwether case for the application of TOS to regulate speech. Unfortunately, the case Parler presented is appallingly weak (one lawyer referred to it as a "lolsuit"). For example, the AWS-Parler hosting agreement clearly gives AWS the power to immediately suspend and terminate a client that has violated AWS's Acceptable Use Policy.

In its swift denial of a preliminary injunction in this case, the court also noted the lack of any evidence that Twitter and AWS acted together, either intentionally or at all, to restrain Parler's business.

Parler's causes of action continue with the claim that AWS gave Twitter preferential treatment: similar content appeared on Twitter, yet Twitter's account was not suspended while Parler's was terminated. There are two issues with this assertion. First, we return to the role of moderation. Twitter actively (though, to some degree, imperfectly) moderates the content on its platform. In contrast, Parler's home page stated that its users could "[s]peak freely and express yourself openly, without fear of being deplatformed for your views."

Even more damaging to Parler's assertion is the fact that the evidence in the case demonstrates that AWS does not even host Twitter on its platform, and therefore had no ability to suspend Twitter's account even if it wanted to. But the Parler lawsuit amplifies a point made by one commentator: "If you are so toxic that companies don't want to do business with you, that's on you. Not them" (a point that would also seem to apply to Zingales's justification for Simon & Schuster's cancellation of Senator Josh Hawley's book deal).

In denying Parler's request for a preliminary injunction that would have ordered AWS to restore service to Parler pending a full hearing, the court rejected "any suggestion that the public interest favors requiring AWS to host the incendiary speech that the record shows some of Parler's users have engaged in." While this ruling does not end the case, it does substantiate the weakness of at least some of Parler's arguments.

Although the court will ultimately resolve this particular dispute between AWS and Parler, the underlying issues will remain far from settled regardless of the outcome of the case. The explosion of social media over the last ten years, and its supplanting of traditional media to a large degree, has created a new and untested playing field for public discourse. Some of the issues raised are similar in scope, if not size, to issues our courts have dealt with in the past. The corporate owners of the social media platforms are permitted in our free enterprise system to set the terms under which content may be posted to or removed from their platforms. Because they are non-government actors, the First Amendment's freedom of speech protections do not apply to the speech of a private company, a fact the court confirmed in denying Parler's preliminary injunction. The audience of the social media platforms at issue, however, has grown exponentially. While deplatforming will, at least temporarily, silence those voices promoting violence on specific platforms, the long-lasting implications are less clear.

While it is true that Trump lost the popular vote in the 2020 election by 7 million votes, the fact remains that 74.2 million Americans cast their votes for him. Once Twitter permanently banned Trump from its platform, and many surmised that he would join the less restrictive Parler, nearly one million people downloaded the Parler app from Apple and Google before it was removed from those stores and Parler was suspended from AWS. Moving the conversation off mainstream social media and onto less balanced platforms with more homogeneous subscribers, much to the dismay of even Parler's CEO, John Matze, encourages an echo chamber of ideas on smaller, encrypted platforms that are more difficult to monitor, and potentially amplifies the angriest and most passionate voices.

If the touted exodus of conservatives from Twitter to platforms they view as more welcoming comes to fruition, could our public discourse become even more divided, with opposing viewpoints feeding on their own biases rather than being tempered by response and dialogue with each other? An informed public exposed to conflicting opinions is the best chance for resolving political differences. These issues are important and warrant further discussion, but the current termination of Parler from the AWS platform does not appear to pose a serious threat to public discourse. There are alternatives to AWS, and in fact Parler already seems to have found one that will bring it back online. While there is a significant amount of talk about conservatives leaving Twitter, there is little evidence that this has happened on a large scale. For the moment, at least, the largest social media platforms seem to be retaining users on both sides of the political spectrum, which we believe is good for democracy.
