Trump Executive Order Misreads Key Law Promoting Free Expression Online and Violates the First Amendment – EFF

This post based its initial analysis on a draft Executive Order. It has been updated to reflect the final order, available here.

President Trump's Executive Order targeting social media companies is an assault on free expression online and a transparent attempt to retaliate against Twitter for its decision to curate (well, really just to fact-check) his posts, and to deter everyone else from taking similar steps. The good news is that, assuming the final order looks like the draft we reviewed on Wednesday, it won't survive judicial scrutiny. To see why, let's take a deeper look at its incorrect reading of Section 230 (47 U.S.C. § 230) and how the order violates the First Amendment.

The main thrust of the order is to attack Section 230, the law that underlies the structure of our modern Internet and allows online services to host diverse forums for users' speech. These platforms are currently the primary way that the majority of people express themselves online. To ensure that companies remain able to let other people express themselves online, Section 230 grants online intermediaries broad immunity from liability arising from publishing another's speech. It contains two separate and independent protections.

Subsection (c)(1) shields from liability all traditional publication decisions related to content created by others, including editing and decisions to publish or not publish. It protects online platforms from liability for hosting user-generated content that others claim is unlawful. For example, if Alice has a blog on WordPress, and Bob accuses Clyde of having said something terrible in the blog's comments, Section 230(c)(1) ensures that neither Alice nor WordPress is liable for Bob's statements about Clyde. The subsection would also protect Alice and WordPress from claims brought by Bob even if Alice removed his comment.

Subsection (c)(2) is an additional and independent protection from legal challenges brought by users when platforms decide to edit or not to publish material they deem to be obscene or otherwise objectionable. Unlike (c)(1), (c)(2) requires that the decision be made in good faith. In the context of the above example, (c)(2) would protect Alice and WordPress when Alice decides to remove a term within Bob's comment that she considers to be offensive. Bob cannot successfully sue Alice for that editorial action as long as Alice acted in good faith.

The legal protections in subsections (c)(1) and (c)(2) are completely independent of one another. There is no basis in the language of Section 230 to condition (c)(1)'s immunity on a platform also qualifying for immunity under (c)(2). And courts, including the U.S. Court of Appeals for the Ninth Circuit, have correctly interpreted the provisions as distinct and independent liability shields:

Subsection (c)(1), by itself, shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties. Subsection (c)(2), for its part, provides an additional shield from liability, but only for any action voluntarily taken in good faith to restrict access to or availability of material that the provider ... considers to be obscene ... or otherwise objectionable.

Even though neither the statute nor the court opinions interpreting it mush these two Section 230 provisions together, the order asks the Federal Communications Commission to start a rulemaking and consider linking the two provisions' liability shields. Specifically, the order asks the FCC to consider whether a finding that a platform failed to act in "good faith" under subsection (c)(2) also disqualifies the platform from claiming immunity under subsection (c)(1).

In short, the order tasks government agencies with defining "good faith" and eventually deciding whether any platform's decision to edit, remove, or otherwise moderate user-generated content meets it, upon pain of losing access to all of Section 230's protections.

Should the order result in FCC rules interpreting Section 230 that way, a platform's single act of editing user content that the government doesn't like could cost it both kinds of protection under Section 230. In effect, this works as a trigger that strips Section 230's protections entirely from any platform that hosts something someone disagrees with. And the impact of that trigger would be much broader than liability for the moderation activities purportedly done in bad faith: once a platform was deemed to have acted in bad faith, it could lose (c)(1) immunity for all user-generated content, not just the triggering content. This could subject platforms to a torrent of private litigation over thousands of completely unrelated publication decisions.

Taking a step back, the order purports to give the Executive Branch and federal agencies powerful leverage to force platforms to publish what the government wants them to publish, on pain of losing Section 230's protections. But even if Section 230 permitted this, and it doesn't, the First Amendment bars such intrusions on editorial and curatorial freedom.

The Supreme Court has consistently upheld the right of publishers to make these types of editorial decisions. While the order faults social media platforms for not being purely passive conduits of user speech, the Court derived the First Amendment right from that very feature.

In its 1974 decision in Miami Herald Publishing Co. v. Tornillo, the Court explained:

A newspaper is more than a passive receptacle or conduit for news, comment, and advertising. The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials -- whether fair or unfair -- constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time.

Courts have consistently applied this rule to social media platforms, including in the Ninth Circuit's recent decision in Prager U v. Google and in a decision yesterday by the U.S. Court of Appeals for the District of Columbia Circuit in a case brought by Freedom Watch and Laura Loomer against Google. In another case, a court ruled that when online platforms "select and arrange others' materials, and add the all-important ordering that causes some materials to be displayed first and others last, they are engaging in fully protected First Amendment expression -- the presentation of an edited compilation of speech generated by other persons."

And just last term, in Manhattan Community Access Corp. v. Halleck, the Supreme Court rejected the argument that hosting the speech of others negates these editorial freedoms. The Court wrote, "In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints."

It went on to note that Benjamin Franklin did not have to operate his newspaper as "a stagecoach, with seats for everyone," and that the Constitution "does not disable private property owners and private lessees from exercising editorial discretion over speech and speakers on their property."

The Supreme Court also affirmed that these principles apply regardless of whether something "is a forum more in a metaphysical than in a spatial or geographic sense."

EFF filed amicus briefs in Prager U and Manhattan Community Access, urging that very result. These cases thus foreclose the President's ability to intrude on platforms' editorial decisions and to transform them into public forums akin to parks and sidewalks.

But even if the First Amendment were not implicated, the President cannot use an executive order to rewrite an act of Congress. In passing Section 230, Congress did not grant the Executive the ability to make rules for how the law should be interpreted or implemented. The order cannot arrogate to the President power that Congress has not given.

We should see this order in light of what prompted it: the President's personal disagreement with Twitter's decisions to curate his own tweets. Thus, despite the order's lofty praise for free and open debate on the Internet, this order is in no way based on a broader concern for freedom of speech and the press.

Indeed, this Administration has shown little regard, and much contempt, for freedom of speech and the press. We're skeptical that the order will actually advance the ideals of freedom of speech or be justly implemented.

There are legitimate concerns about the current state of online expression, including how a handful of powerful platforms have centralized user speech to the detriment of competition in the market for online services and of users' privacy and free expression. But the order announced today doesn't actually address those concerns, and it isn't the vehicle to fix those problems. Instead, it represents a heavy-handed attempt by the President to retaliate against an American company for not doing his bidding. It must be stopped.
