Section 230 heads to the Supreme Court

Posted: October 8, 2022 at 3:23 pm

For the past several years, critics across the political spectrum have argued that Section 230 of the Communications Decency Act of 1996 gives social media platforms such as Facebook, Twitter, and YouTube too much protection from legal liability for the content they host. Conservative critics argue, despite a lack of evidence, that Section 230 allows social media companies to censor conservative thinkers and groups without recourse, while liberal critics say the platforms use Section 230 as an excuse not to remove things they should be taking down, such as misinformation and hate speech. Before the 2020 election, Joe Biden said he would abolish Section 230 if he became president; since taking office, he has made similar statements, including that the clause should be revoked immediately.

This week, the Supreme Court announced that it would hear two cases seeking to chip away at Section 230's legal protections. At the core of one case is the claim that Google's YouTube service violated the federal Anti-Terrorism Act by recommending videos featuring the ISIS terrorist group, and that these videos helped lead to the death of Nohemi Gonzalez, a twenty-three-year-old US citizen who was killed in an ISIS attack in Paris in 2015. In the lawsuit, filed in 2016, Gonzalez's family claims that while Section 230 protects YouTube from liability for hosting such content, it doesn't protect the company from liability for promoting that content with its algorithms. The second case involves Twitter, which was also sued for violating the Anti-Terrorism Act; the family of Nawras Alassaf claimed that ISIS-related content on Twitter contributed to his death in a terrorist attack in 2017.

In recent years, the Supreme Court has declined to hear similar cases, including, in March, an appeal of a lower-court decision finding that Facebook was not liable for helping a man traffic a woman for sex. While Justice Clarence Thomas agreed with the decision not to hear that case, he also wrote that the court should consider the proper scope of immunity under Section 230. "Assuming Congress does not step in to clarify Section 230's scope, we should do so in an appropriate case," Thomas wrote. "It is hard to see why the protection that Section 230 grants publishers against being held strictly liable for third parties' content should protect Facebook from liability for its own acts and omissions."

Thomas has made similar comments in a number of other decisions. In 2020, the Supreme Court declined to hear a case in which Enigma Software argued that Malwarebytes, an internet security company, should be liable for labeling Enigma's products malware. Although he agreed with that decision, Thomas went on at length about what he described as a movement to use Section 230 to confer sweeping immunity on some of the largest companies in the world. He also suggested that he agreed with an opinion from a lower-court judge in a case in which Facebook was sued over terrorist content. The opinion said it "strains the English language to say that in targeting and recommending these writings to users... Facebook is acting as 'the publisher of information provided by another information content provider.'"

Jeff Kosseff, a cybersecurity law professor at the US Naval Academy and the author of a book on Section 230, told the Washington Post that, with the Supreme Court considering these questions, the entire scope of Section 230 could be at stake. The Post also noted that this will be the first time the court has considered whether there is a distinction between content that platforms host and content their algorithms recommend. Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told the Post that such a division is actually a false dichotomy, and that recommending content is one of the traditional editorial functions of a social media network. In that sense, he told the Post, the question presented goes to the very heart of Section 230.

While Section 230 gets most of the attention, it isn't the only protection the platforms have. A feature on hate speech in the New York Times described Section 230 as the main reason such speech exists online, but the paper later added a correction clarifying that the First Amendment also protects online speech. Even if the Supreme Court decides that Section 230 doesn't protect the platforms when it comes to terrorist content, Facebook and Twitter could argue, with some justification, that the First Amendment does. "To the extent that people want to force social media companies to leave certain speech up, or to boost certain content, or ensure any individual's continuing access to a platform, their problem isn't Section 230," Mary Anne Franks, a professor of law at the University of Miami, said during a discussion of Section 230 on CJR's Galley platform last year. "It's the First Amendment."

This argument is at the heart of another case the Supreme Court was recently asked to hear, involving a Florida law designed to control how the platforms moderate content. The Eleventh Circuit Court of Appeals struck the law down as unconstitutional in May, ruling that moderation decisions are an exercise of the platforms' First Amendment rights. A similar law passed in Texas, however, was upheld in a decision earlier this month, one that explicitly rejected the First Amendment defense. Now the Supreme Court has the opportunity to decide the extent to which Section 230 and the First Amendment cover the platforms' moderation and content choices.


TOP IMAGE: A general view of the U.S. Supreme Court, in Washington, D.C., on Wednesday, September 21, 2022. (Graeme Sloan/Sipa USA)(Sipa via AP Images)
