Is YouTube Violating the First Amendment by Removing Anti-Vaccine Videos? – News@Northeastern

As part of a new set of policies designed to cut down on anti-vaccine content and health misinformation, YouTube is starting to ban any videos that claim commonly used vaccines approved by health authorities are ineffective or dangerous.

The video-sharing platform, and others including Facebook and Twitter, had already banned misinformation related to the COVID-19 vaccines. The new policy takes the crackdown one step further, with YouTube taking down anti-vaccine posts as well as the accounts of people who spread false information about other vaccines.

Critics and people who have propagated vaccine misinformation on the social media platform immediately decried the move as a violation of their First Amendment right to free speech, a fundamental misunderstanding of free speech protections, says Claudia Haupt, associate professor of law and political science at Northeastern.

The First Amendment, which protects free speech in the U.S., applies to government censorship of protected speech, but not to private companies such as YouTube, Facebook, or Twitter.

But just because the First Amendment doesn't apply here doesn't mean that there aren't tricky questions for platforms deciding which posts stay and which are taken down, Haupt says.

Claudia Haupt, associate professor of law and political science. Photo by Matthew Modoono/Northeastern University

Does this move make sense, as a way to curb vaccine misinformation?

If I understand it correctly, Facebook and Twitter had already banned vaccine misinformation, and YouTube was the last large platform to do so. It's not surprising: if you think about the way content gets shared across those platforms, it doesn't really help to target just one of them. If you're concerned about misinformation, you would want to look at the entire ecosystem of social media platforms.

Do people who share anti-vaccine rhetoric on social media platforms have a First Amendment right to do so?

We have to start from the premise that no one has a First Amendment right to post on those platforms. There's no First Amendment right to be on the platform, and the companies aren't required to make content-neutral moderation decisions; they can exclude certain viewpoints.

But just because the First Amendment doesn't apply here doesn't mean that there aren't tricky questions: even if you're a private platform that can moderate independent of the First Amendment, you have to make a decision about what your guiding principles are for including or excluding certain messages. So, for example, you could say, "I'm going with the medical consensus around vaccines, and I'm going to exclude all messages about vaccines that directly contradict the medical community's understanding of how vaccines work."

You can see that in the link people have drawn between the childhood measles, mumps, and rubella vaccine and autism: it's an idea that has been refuted; it's simply inaccurate as a matter of science. So, you could exclude all the statements that pertain to that and set the bar according to what the medical community says. You could still, though, decide to permit people to share stories about bad things that have happened to them, because they're not making a medical claim or giving advice; they're just telling a story about what happened in their lives. There's no direct link between what they say and telling other people what to do.

But again, all this is independent of the First Amendment because these are private companies.

In that case, how do companies decide what's in and what's out?

In this context with vaccines, on the one hand you have expertise in a medical community that we recognize as the authority on the question, and on the other, we know that huge amounts of harm can be inflicted by bad information or bad advice.

You could imagine closer cases where it's harder to decide what the standard is, but with medical information, we have a scientific standard to go by.

But there are also instances where the science is contested. At the beginning of the pandemic, we had the problem that giving advice was really hard because the medical community was figuring things out as the virus spread. There, it would be really difficult, and really problematic, for private companies to decide that some things are good advice and some things are bad advice.

The platform has to pick whose expertise, whose assessment, to follow. And this comes up in malpractice all the time: if you go to the doctor and get bad advice, the standard it's judged by is the community of medical professionals. I think it makes sense to also use that as a baseline for speech if it's framed as giving advice.

So often, as we can see here, these decisions boil down to a black-and-white conversation: either "I have free speech" or "I'm being censored." Is there a better conversation we could be having?

With these platforms, "I have a right to say something" is the reflexive cultural posture we take, because we're so used to talking about rights and the First Amendment. But legally, that doesn't even apply in this space.

Generally, one way I think we should think about it is to weigh speech as one variable, harm as another, and expertise as a third. So it's not just my right to speak against your right to speak; it's more about what the speech does. What level of harm may it cause? Is there something in the content that can be measured against expertise?

For media inquiries, please contact Marirose Sartoretto at m.sartoretto@northeastern.edu or 617-373-5718.
