Why Moderating Content Actually Does More To Support The Principles Of Free Speech – Techdirt

from the back-to-basics dept

Obviously, over the past few years there have been all of these debates about the content moderation practices of various websites. We've written about it a ton, including in our Content Moderation Case Study series (currently on hiatus, but hopefully back soon). The goal of that series was to demonstrate that content moderation is rarely (if ever) about censoring speech, and almost always about dealing with the extremely challenging decisions that any website hosting user content has to make. Some of that involves legal requirements, some of it involves trying to keep a community focused, some of it involves dealing with spam, and some of it involves just crazy difficult decisions about what kind of community you want.

And yet, there are still those who insist that any form of content moderation is either censorship or somehow against the principles of free speech. That's the line we keep hearing. Last week, in the discussion regarding Elon Musk's poll about whether or not Twitter supports free speech, people kept telling me that the key point was about the principles of free speech, rather than what the law says. This discussion also came up recently with regard to the various discussions about cancel culture.

I understand where this impulse comes from, because I had it in the past myself. Over a decade ago, I was invited to give a talk to the policy people running one of the large user-generated content platforms, and the room was chock full of former ACLU/free speech lawyers. I remember one of them asking me if I had thoughts on when it would be okay for them to remove content. I started to say that it should be avoided at almost all costs, when they began tossing out example after example that made me realize that "never" is not an answer that works here. I still recommend listening to a Radiolab episode from a few years ago that does an amazing job laying out the impossible choices when it comes to content moderation. It highlights not only how "never" is not a reasonable option, but how, no matter what rules you set, you will be faced with an unfathomable number of cases where the right answer, or the right way to apply a policy, is not at all clear.

Lawyer Akiva Cohen recently had a really worthwhile thread explaining why the entire concept of a philosophical commitment to free speech is somewhat meaningless if you think it's distinct from government consequences. The key point he makes is that once you separate the principles or the philosophy of free speech from legal consequences, you're simply down to debating competing speech and associations:

I think that's exactly correct, and it's why I keep pointing out that so much of the talk about cancel culture is often really about people who want to be free from the social consequences of critics' speech. And the issue with content moderation is that it's people wishing to be free from the social consequences of others' association choices. A key part of actual free speech includes the right to associate or not to associate. And compelled speech goes against that.

But I want to take this argument even further, because it seems like many people believe that even if you recognize that concept, content moderation is somehow inherently incompatible with support for free speech. And I can understand the first-order thinking that gets you there: content moderation involves taking down or otherwise restricting some speech, and so it automatically feels like it must go against free speech.

But the reality is a lot more nuanced, to the point that content moderation clearly enables more free speech. First, let's look at the world without any content moderation. A website that has no content moderation but allows anyone to post will fill up with spam. Even this tiny website gets thousands of spam comments a day. Most of them are (thankfully) caught by the layers upon layers of filtering tools we've set up.

Would anyone argue that it is against the principles of free speech to filter spam? I would hope not.

But once you've admitted that it's okay to filter spam, you've already admitted that content moderation is okay; you're just haggling over how much and where to draw the lines.

And, really, the spam example is instructive in many ways. People recognize that if a website is overrun with spam, it's actually detrimental for speech overall, because how can anyone communicate when all of the communication is interrupted or hard to find due to spam?

So moderating spam seems to quite clearly enable more free speech by making platforms for speech more usable. Without such moderation, the platforms would get less use and people would be less likely to be able to speak in the same manner.
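To make the "layers upon layers of filtering" idea concrete, here is a minimal sketch of what layered comment filtering can look like. Everything in it is invented for illustration (the phrase list, the link threshold, the heuristics); it is not Techdirt's actual system, just an example of how several cheap checks can be stacked so that a comment is held for review as soon as any one layer flags it.

```python
import re

# Hypothetical denylist and thresholds, purely for illustration.
BLOCKED_PHRASES = {"buy now", "crypto giveaway", "work from home"}
MAX_LINKS = 2

def too_many_links(text: str) -> bool:
    """Layer 1: spam comments tend to be stuffed with links."""
    return len(re.findall(r"https?://", text)) > MAX_LINKS

def has_blocked_phrase(text: str) -> bool:
    """Layer 2: a small denylist of common spam phrases."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

def is_shouting(text: str) -> bool:
    """Layer 3: a crude heuristic flagging mostly-uppercase comments."""
    letters = [c for c in text if c.isalpha()]
    return bool(letters) and sum(c.isupper() for c in letters) / len(letters) > 0.8

LAYERS = [too_many_links, has_blocked_phrase, is_shouting]

def classify(comment: str) -> str:
    """Run the comment through each layer in turn; stop at the first hit."""
    for layer in LAYERS:
        if layer(comment):
            return "held"  # held for human review, not silently deleted
    return "published"

print(classify("Great post, thanks!"))                                  # published
print(classify("BUY NOW http://a.com http://b.com http://c.com"))       # held
```

The point of the layered design is exactly the haggling described above: each layer encodes one line-drawing decision, and real systems simply stack far more of them (rate limits, reputation scores, trained classifiers) on the same pattern.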

Now let's expand that circle out as well. There's increasing evidence that when you have a totally freeform venue for free speech, it makes many people hold back and not join in. For all the talk of cancel culture that relies on claims that people are somehow afraid to speak their minds, those making that claim should maybe consider that the problem might not be cancel culture, but that some people don't want to have to constantly debate their beliefs with every rando who challenges them.

In other words, a fully open forum is not all that conducive to free speech either, because it's simply too much.

Instead, what content moderation does is create spaces where more people can feel free to talk. It creates different communities which aren't just an open free-for-all, but are more focused and targeted. This actually ties back into the Section 230 debate as well. As the authors of Section 230 have explained, when they wrote that "[t]he Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity," they did not mean that every website should host all of that content itself, but rather that, by enabling content moderation, distinct and diverse communities could form. As they explained:

In our view as the law's authors, this requires that government allow a thousand flowers to bloom, not that a single website has to represent every conceivable point of view. The reason that Section 230 does not require political neutrality, and was never intended to do so, is that it would enforce homogeneity: every website would have the same neutral point of view. This is the opposite of true diversity.

To use an obvious example, neither the Democratic National Committee nor the Republican National Committee websites would pass a political neutrality test. Government-compelled speech is not the way to ensure diverse viewpoints. Permitting websites to choose their own viewpoints is.

Section 230 is agnostic about what point of view, if any, a website chooses to adopt; but Section 230 is not the source of legal protection for platforms that wish to express a point of view. Online platforms, no less than offline publishers, have a First Amendment right to express their opinion. When a website expresses its own opinion, it is, with respect to that expression, a content creator and, under Section 230, not protected against liability for that content.

In other words, the concept of free speech should support a diversity of communities, not all speech on every community (or on any particular community). And content moderation is what makes that possible.

The internet itself is an incredible platform for free speech, and we should be fighting to keep that wider internet open and free from too much regulatory burden and limits. But part of the reason the internet is such an incredible platform is that on the internet, anyone is able to find different communities that they feel are appropriate for them. Or to create their own without first having to get permission.

The people who demand that someone else's community must conform to their standards aren't supporting the principles of free speech; they're demanding that others bend to their will.

And if that's the case, it's going to end up shutting down a lot of speech. This is where the association part of free speech comes in. I don't want to host a ton of spam on Techdirt, so I filter it. If I were required to host all that spam and not moderate it, I would shut down our comments, because otherwise they'd be useless.

Similarly, if we force websites to host all content in the name of free speech, those websites are much less likely to want to continue offering that service to the public. Because now they're offering something different from what they wanted to offer, and now they have to deal with the spam, abuse, harassment, and other nonsense that is driving away many of their other users.

The end result, then, is that you get fewer places for speech, rather than more. And that is an attack on the principles of free speech.

None of this means that there aren't reasons to criticize particular moderation policies or decisions. We can debate them based on the specifics: why this policy or that decision may be problematic for reasons x, y, and z. But to simply state that those policies are against free speech is meaningless.

Filed Under: cancel culture, content moderation, free speech, principles of free speech, section 230
