Danielle Citron, a professor at the Boston University School of Law and a 2019 MacArthur Fellow, has argued that the immunity afforded by Section 230 is too broad. In a recent article for the Michigan Law Review, she writes that the law would apply even to platforms that have urged users to engage in tortious and illegal activity or designed their sites to enhance the reach of such activities. In 2017, Citron and Benjamin Wittes, a legal scholar and the editor-in-chief of the Lawfare blog, argued that a better version of the law would grant a platform immunity only if it had taken reasonable steps to prevent or address unlawful uses of its services. A reasonableness standard, they note, would allow for different companies to take different approaches, and for those approaches to evolve as technology changes.
It's possible to keep Section 230 in place while carving out exceptions to it, but at the cost of significant legal complexity. In 2018, Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), a bill intended to curtail sex trafficking. Under FOSTA, Internet platforms no longer receive immunity from civil and criminal charges of sex trafficking, and posts that might promote and facilitate prostitution no longer enjoy a liability shield. Kosseff, testifying before a House subcommittee, acknowledged the gravity and urgency of the sex-trafficking issue but cautioned that there were strong arguments against the bill. Rather than allowing states to get around Section 230's immunity shield (a move that could force platforms to comply with many different state laws concerning sex trafficking and prostitution), Kosseff suggested that Congress enhance the federal criminal laws on sex trafficking, to which platforms are already subject. Two years in, it's not clear that FOSTA has had any material effect on sex trafficking; meanwhile, sex workers and advocates say that, by pushing them off of mainstream platforms, the legislation has made their work markedly more dangerous. After the law was passed, Craigslist removed its personals section. "Any tool or service can be misused," a banner on the site read. "We can't take such risk without jeopardizing all our other services."
There is a strong case for keeping Section 230's protections as they are. The Electronic Frontier Foundation, a digital-civil-liberties nonprofit, frames Section 230 as "one of the most valuable tools for protecting freedom of expression and innovation on the Internet." Kosseff, with some reservations, comes to a similar conclusion. "Section 230 has become so intertwined with our fundamental conceptions of the Internet that any wholesale reductions to the immunity it offers could irreparably destroy the free speech that has shaped our society in the twenty-first century," he writes. He compares Section 230 to the foundation of a house: the modern Internet isn't the nicest house on the block, but it's the house where we all live, and it's too late to rebuild it from the ground up. Some legal scholars argue that repealing or altering the law could create an even smaller landscape of Internet companies. Without Section 230, only platforms with the resources for constant litigation would survive; even there, user-generated content would be heavily restricted in service of diminished liability. Social-media startups might fade away, along with niche political sites, birding message boards, classifieds, restaurant reviews, support-group forums, and comments sections. In their place would be a desiccated, sanitized, corporate Internet, less like an electronic frontier than a well-patrolled office park.
The house built atop Section 230 is distinctive. It's furnished with terms-of-service agreements, community-standards documents, and content guidelines: the artifacts through which platforms express their rules about speech. The rules vary from company to company, often developing on the tailwinds of technology and in the shadow of corporate culture. Twitter began as a service for trading errant thoughts and inanities within small communities ("Bird chirps sound meaningless to us, but meaning is applied by other birds," Jack Dorsey, its C.E.O., once told the Los Angeles Times), and so, initially, its terms of service were sparse. The document, which was modelled on Flickr's terms, contained little guidance on content standards, save for one clause warning against abuse and another, under "General Conditions," stating that Twitter was entitled to remove, at its discretion, anything it deemed "unlawful, offensive, threatening, libelous, defamatory, obscene or otherwise objectionable."
In 2009, Twitter's terms changed slightly ("What you say on Twitter may be viewed all around the world instantly," a Clippy-esque annotation warned) and expanded to include a secondary document, the Twitter Rules. These rules, in turn, contained a new section on spam and abuse. At that point, apart from a clause addressing violence and threats, "abuse" referred mainly to misuse of Twitter: username sales, bulk creation of new accounts, automated replies, and the like. In her history of the Twitter Rules, the writer Sarah Jeong identifies the summer of 2013 as an inflection point: following several high-profile instances of abuse on the platform, including a harassment campaign against the British politician Stella Creasy, Twitter introduced a "report abuse" button and added language to the rules addressing targeted harassment. That November, the company went public. "Changes in the Rules over time reflect the pragmatic reality of running a business," Jeong concludes. Twitter "talked some big talk about free speech," she writes, but it ended up "tweaking and changing the Rules around speech whenever something threatened its bottom line."
Under Section 230, content moderation is free to be idiosyncratic. Companies have their own ideas about right and wrong; some have flagship issues that have shaped their outlooks. In part because its users have pushed it to take a clear stance on anti-vaccination content, Pinterest has developed particularly strong policies on misinformation: the company now rejects pins from certain Web sites, blocks certain search terms, and digitally fingerprints anti-vaccination memes so that they can be identified and excluded from its service. Twitter's challenge is bigger, however, because it is both all-encompassing and geopolitical. Twitter is a venue for self-promotion, social change, violence, bigotry, exploration, and education; it is a billboard, a rally, a bully pulpit, a networking event, a course catalogue, a front page, and a mirror. The Twitter Rules now include provisions on terrorism and violent extremism, suicide and self-harm. Distinct regulations address threats of violence, glorifications of violence, and hateful conduct toward people on the basis of gender identity, religious affiliation, age, disability, and caste, among other traits and classifications. The company's rules have a global reach: in Germany, for instance, Twitter must implement more aggressive filters and moderation, in order to comply with laws banning neo-Nazi content.
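Pinterest hasn't disclosed how its fingerprinting works. As a rough illustration of the general technique, the sketch below computes a perceptual "average hash": a compact signature that changes little when an image is re-encoded or lightly cropped, so near-duplicates of a known meme can be flagged for exclusion. Everything here (the hashing scheme, the file names, the distance threshold) is an assumption chosen for illustration, not Pinterest's actual system.

```python
# A minimal perceptual "average hash" sketch -- illustrative only, not
# Pinterest's (unpublished) method. The idea: reduce an image to a tiny
# grayscale thumbnail so that re-encoding and small edits mostly wash
# out, then record which pixels are brighter than the mean.
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit fingerprint for the image at `path`."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: 1 if brighter than the average, else 0.
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests near-duplicates."""
    return bin(a ^ b).count("1")


if __name__ == "__main__":
    # Hypothetical files: a fingerprint from a blocklist of known memes,
    # compared against a new upload. The threshold of 5 is arbitrary.
    known = average_hash("banned_meme.png")
    upload = average_hash("new_upload.png")
    if hamming_distance(known, upload) <= 5:
        print("Likely a variant of a known image; route to review.")
```

In a system of this sort, the platform would keep fingerprints of known images and compare each new upload against the list, catching variants that an exact-match filter would miss.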
In a 2018 article published in the Harvard Law Review, "The New Governors: The People, Rules, and Processes Governing Online Speech," Kate Klonick, who is now a professor at St. John's University Law School, tallies the sometimes conflicting factors that have shaped the moderation policies at Twitter, Facebook, and YouTube. The companies, she writes, have been influenced by a fundamental belief in American free-speech norms, a sense of corporate responsibility, and user expectations. They've also reacted to government requests, media scrutiny, pressure from users or public figures, and the demands of third-party civil-society groups, such as the Anti-Defamation League. They have sometimes instituted new rules in response to individual incidents. There are downsides to this kind of improvisational responsiveness: a lack of transparency and accountability creates conditions ripe for preferential treatment and double standards.