California Seems To Be Taking The Exact Wrong Lessons From Texas And Florida’s Social Media Censorship Laws – Techdirt

Posted: June 24, 2022 at 9:47 pm

from the who-does-this-help? dept

This post analyzes California AB 587, self-described as "Content Moderation Requirements for Internet Terms of Service." I believe the bill will get a legislative hearing later this month.

A note about the draft I'm analyzing, posted here. It's dated June 6, and it's different from the version publicly posted on the legislature's website (dated April 28). I'm not sure what the June 6 draft's redlines compare to (maybe the bill as introduced?). I'm also not sure if the June 6 draft will be the basis of the hearing, or if there will be more iterations between now and then. It's exceptionally difficult for me to analyze bills that are changing rapidly in secret. When bill drafters secretly solicit feedback, every other constituency cannot follow along or share timely or helpful feedback. It's especially ironic to see non-public activity for a bill that's all about mandating transparency. ¯\_(ツ)_/¯

Who's Covered by the Bill?

The bill applies to social media platforms that: "(A) Construct a public or semipublic profile within a bounded system created by the service. (B) Populate a list of other users with whom an individual shares a connection within the system. [and] (C) View and navigate a list of connections made by other individuals within the system."

This definition of "social media" has been around for about a decade, and it's awful. Critiques I made 8 years ago:

First, what is a "semi-public profile," and how does it differ from a public or non-public profile? Is there even such a thing as a semi-private or non-public profile?

Second, what does a "bounded system" mean? The "bounded system" phrase sounds like a walled garden of some sort, but most walled gardens aren't impervious. So what delimits the "boundaries" the statute refers to, and what does an unbounded system look like?

I also don't understand what constitutes a "connection," what a "list of connections" means, or what it means to "populate" the connection list. This definition of social media was never meant to be used as a statutory definition, and every word invites litigation.

Further, the legislature should (but surely has not) run this definition through a test suite to make sure it fits the legislature's intent. In particular, which, if any, services offering user-generated content (UGC) functionality do NOT satisfy this definition? Though decades of litigation might ultimately answer the question, I expect that the language likely covers all UGC services.

[Note: based on a quick Lexis search, I saw similar statutory language in about 20 laws, but I did not see any caselaw interpreting the language because I believe those laws are largely unused.]

The bill then excludes some UGC services from its scope.

The Laws Requirements

Publish the TOS

The bill requires social media platforms to post their terms of service (TOS), translated into every language they offer product features in. It defines TOS as:

"a policy or set of policies adopted by a social media company that specifies, at least, the user behavior and activities that are permitted on the internet-based service owned or operated by the social media company, and the user behavior and activities that may subject the user or an item of content to being actioned. This may include, but is not limited to, a terms of service document or agreement, rules or content moderation guidelines, community guidelines, acceptable uses, and other policies and established practices that outline these policies."

To start, I need to address the ambiguity of what constitutes the "TOS," because it's the most dangerous and censorial trap of the bill. Every service publishes public-facing editorial rules, but the published versions never can capture ALL of the service's editorial rules. Exceptions include: private interpretations that are not shared to protect against gaming, private interpretations that are too detailed for public consumption, private interpretations that governments ask/demand the services don't tell the public about, private interpretations that are made on the fly in response to exigencies, one-off exceptions, and more.

According to the bill's definition, failing to publish all of these non-public policies and practices before taking action based on them could mean noncompliance with the bill's requirements. Given the inevitability of such undisclosed editorial policies, it seems like every service always will be noncompliant.

Furthermore, to the extent the bill inhibits services from making an editorial decision using a policy/practice that hasn't been pre-announced, the bill would control and skew the services' editorial decisions. This pre-announcement requirement would have the same effect as Florida's restriction on services updating their TOSes more than once every 30 days (the 11th Circuit held that restriction was unconstitutional).

Finally, imagine trying to impose a similar editorial policy disclosure requirement on a traditional publisher like a newspaper or book publisher. They currently aren't required to disclose ANY editorial policies, let alone ALL of them, and I believe any such effort to require such disclosures would obviously be struck down as an unconstitutional intrusion into the freedom of speech and press.

In addition to requiring the TOS's publication, the bill says the TOS must include (1) a way to contact the platform to ask questions about the TOS, (2) descriptions of how users can complain about content and the social media company's commitments on response and resolution time (drafting suggestion for regulated services: "We do not promise to respond, ever"), and (3) "A list of potential actions the social media company may take against an item of content or a user, including, but not limited to, removal, demonetization, deprioritization, or banning." I identified 3 dozen potential actions in my Content Moderation Remedies article, and I'm sure more exist or will be developed, so the remedies list should be long, and I'm not sure how a platform could pre-announce the full universe of possible remedies.

Information Disclosures to the CA AG

Once a quarter, the bill would require platforms to deliver to the CA AG the current TOS, a complete and detailed description of changes to the TOS in the prior quarter, and a statement of whether the TOS defines any of the following five terms and what the definitions are: "hate speech or racism," "extremism or radicalization," "disinformation or misinformation," "harassment," and "foreign political interference." [If the definitions are from the TOS, can't the AG just read that?] I'll call the enumerated five content categories the "Targeted Constitutionally Protected Content."

In addition, the platforms would need to provide a detailed description of content moderation practices used by the social media company. This seems to contemplate more disclosures than just the TOS, but the TOS definition seemingly already captured all of the service's content moderation rules. I assume the bill wants to know how the service's editorial policies are operationalized, but it doesn't make that clear. Plus, like Texas' open-ended disclosure requirements, the unbounded disclosure obligation ensures litigation over (unavoidable) omissions.

Beyond the open-ended requirement, the bill enumerates an overwhelmingly complex list of required disclosures, which are far more invasive and burdensome than Texas' plenty-burdensome demands.

All told, there are 7 categories of disclosures, and the bill indicates that each must be broken out along dimensions with, respectively, 5 options, at least 5 options, at least 3 options, at least 5 options, and at least 5 options. So I believe the bill requires that each service's reports include no fewer than 161 different categories of disclosures (7×5 + 7×5 + 7×3 + 7×5 + 7×5).
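For concreteness, here is the arithmetic behind that 161 figure (a back-of-the-envelope sketch of my own tally above, not text from the bill):

```python
# My tally: 7 disclosure categories, each broken out along five dimensions
# that have (at least) 5, 5, 3, 5, and 5 options respectively.
disclosure_categories = 7
dimension_options = [5, 5, 3, 5, 5]  # "at least" floors, so this is a minimum

minimum_disclosures = sum(disclosure_categories * n for n in dimension_options)
print(minimum_disclosures)  # 161
```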

Who will benefit from these disclosures? At minimum, unlike the purported justification cited by the 11th Circuit for Florida's disclosure requirements, the bill's required statistics cannot help consumers make better marketplace choices. By definition, each service can define each category of Targeted Constitutionally Protected Content differently, so consumers cannot compare the reported numbers across services. Furthermore, because services can change how they define each content category from time to time, it won't even be possible to compare a service's new numbers against its prior numbers to determine if it is getting better or worse at managing the Targeted Constitutionally Protected Content. Services could even change their definitions so they don't have to report anything. For example, a service could create an omnibus category of "incivil content/activity" that includes some or all of the Targeted Constitutionally Protected Content categories, in which case it wouldn't have to disclose anything. (Note also that this countermove would represent a change in the service's editorial practices impelled by the bill, which exacerbates the constitutional problem discussed below.) So who is the audience for the statistics, and what, exactly, will they learn from the required disclosures? Without clear and persuasive answers to these questions, it looks like the state is demanding the info purely as a raw exercise of power, not to benefit any constituency.

Remedies

Violations can trigger penalties of up to $15k/violation/day, and the penalties should at minimum be sufficient to induce compliance with this act but should be mitigated if the service made a reasonable, good faith attempt to comply. The AG can enforce the law, but so can county counsel and city DAs in some circumstances. The bill provides those non-AG enforcers with some financial incentives to chase the penalty money as a bounty.

An earlier draft of the bill expressly authorized private rights of action via B&P 17200. Fortunately, that provision got struck, but, unfortunately, in its place there's a provision saying that this bill is cumulative with any other law. As a result, I think the 17200 PRA is still available. If so, this bill will be a perpetual litigation machine. I would expect every lawsuit against a regulated service to add AB 587 claims for alleged omissions, misrepresentations, etc. Like the CCPA/CPRA, the bill should clearly eliminate all PRAs, unless the legislature wants Californians suing each other into oblivion.

Some Structural Problems with the Bill

Although the prior section identified some obvious drafting errors, fixing those errors won't make this a good bill. Here are some structural problems with the bill that can't be readily fixed.

The overall problem with mandatory editorial transparency. I just wrote a whole paper explaining why mandatory editorial transparency laws like AB 587 are categorically unconstitutional, so you should start with that if you haven't already read it. To summarize, the disclosure requirements about editorial policies and practices functionally control speech by inducing publishers to make editorial decisions that will placate regulators rather than best serve the publisher's audience. Furthermore, any investigation of the mandated disclosures puts the government in the position of supervising the editorial process, an unhealthy entanglement. I already mentioned one such example, where regulators try to validate whether the service properly described when it does manual vs. automated content moderation. Such an investigation would necessarily scrutinize and second-guess every aspect of the service's editorial function.

Because of these inevitable speech restrictions, I believe strict scrutiny should apply to AB 587, without relying on the confused caselaw involving compelled commercial disclosures. In other words, I don't think Zauderer (a recent darling of the pro-censorship crowd) is the right test (I will have more to say on this topic). Further, Zauderer only applies when the disclosures are uncontroversial and purely factual, but the AB 587 disclosures are neither. The Targeted Constitutionally Protected Content categories all involve highly political topics, not the pricing terms at issue in Zauderer; and the disclosures require substantial and highly debatable exercises of judgment to make the classifications, so they are not purely factual. And even if Zauderer does apply, I think the disclosure requirements impose an undue burden. For example, if 161 different prophylactic just-in-case disclosures don't constitute an undue burden, I don't know what would.

The TOS definition problem. As I mentioned, what constitutes part of the TOS creates a litigation trap easily exploited by plaintiffs. Furthermore, if it requires the publication of policies and practices that justifiably should not be published, the law intrudes into editorial processes.

The favoritism shown to the Targeted Constitutionally Protected Content. The law privileges the five categories in the Targeted Constitutionally Protected Content for heightened attention by services, but there are many other categories of lawful-but-awful content that are not given equal treatment. Why?

This distinction between types of lawful-but-awful speech sends the obvious message to services that they need to pay closer attention to these content categories over the others. This implicit message to reprioritize content categories distorts the services' editorial prerogative, and if services get the message that they should manage the disclosed numbers down, the bill reduces constitutionally protected speech. However, services won't know if they should be managing the numbers down. The AG is a Democrat, so he's likely to prefer less lawful-but-awful content. However, many county prosecutors in red counties (yes, California has them) may prefer less content moderation of constitutionally protected speech and would investigate if they see the numbers trending down. Given that services are trapped between these competing partisan dynamics, they will be paralyzed in their editorial decision-making. This reiterates why the bill doesn't satisfy Zauderer's "uncontroversial" prong.

The problem of classifying the Targeted Constitutionally Protected Content. Determining what fits into each category of the Targeted Constitutionally Protected Content is an editorial judgment that always will be subject to substantial debate. Consider, for example, how often the Oversight Board has reversed Facebook on similar topics. Plaintiffs can always disagree with a service's classifications, and that puts them in the role of second-guessing the service's editorial decisions.

Social media exceptionalism. As Benkler et al.'s book Network Propaganda showed, Fox News injects misinformation into the conversation, which then propagates to social media. So why does the bill target social media and not Fox News? More generally, the bill doesn't explain why social media needs this intervention compared to traditional publishers or even other types of online publishers (say, Breitbart?). Or is the state's position that it could impose equally invasive transparency obligations on the editorial decisions of other publishers, like newspapers and book publishers?

The favoritism shown to the excluded services. I think the state will have a difficult time justifying why some UGC services get a free pass from the requirements. It sure looks arbitrary.

The Dormant Commerce Clause. The bill does not restrict its reach to California, which creates several potential DCC problems.

Conclusion

Stepping back from the details, the bill can be roughly divided into two components: (1) the TOS publication and delivery component, and (2) the operational disclosures and statistics component. Abstracting the bill at this level highlights the bill's pure cynicism.

The TOS publication and delivery component is obviously pointless. Any regulated platform already posts its TOS and likely addresses the specified topics, at least at some level of generality (and an obvious countermove to this bill will be for services to make their public-facing disclosures more general and less specific than they currently are). Consumers can already read those onsite TOSes if they care, and the AG's office can already access those TOSes any time it wants. (Heck, the AG can even set up bots to download copies quarterly, or even more frequently, and I wonder if the AG's office has ever used the Wayback Machine?) So if this provision isn't really generating any new disclosures to consumers, it's just creating technical traps that platforms might trip over.
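To underscore how easy that already is, here is a minimal sketch of such a bot (my illustration, not anything contemplated by the bill; the platform URL and output directory are hypothetical placeholders):

```python
# Minimal TOS-archiving bot: fetch a platform's public TOS page and keep a
# dated, content-hashed snapshot so changes can be diffed later.
import hashlib
import urllib.request
from datetime import date
from pathlib import Path

TOS_URL = "https://example-platform.com/terms"  # hypothetical placeholder URL
ARCHIVE_DIR = Path("tos_archive")

def snapshot_tos(url: str = TOS_URL, out_dir: Path = ARCHIVE_DIR) -> Path:
    """Download the public TOS page and store a dated, hashed copy."""
    out_dir.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url, timeout=30) as resp:
        body = resp.read()
    digest = hashlib.sha256(body).hexdigest()[:12]  # fingerprint to spot changes
    dest = out_dir / f"tos_{date.today().isoformat()}_{digest}.html"
    dest.write_bytes(body)
    return dest

if __name__ == "__main__":
    # Run from a quarterly (or daily) cron job to build a change history over time.
    print(f"Saved snapshot to {snapshot_tos()}")
```

Run on a schedule, a script like this would accumulate a dated archive from which TOS changes could be diffed, without any new statutory mandate.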

The operational disclosures and statistics component would likely create new public data, but as explained above, it's data that is worthless to consumers. Like the TOS publication and delivery provision, it feels more like a trap for technical enforcement than a provision that benefits California residents. It's also almost certainly unconstitutional. The emphasis on the Targeted Constitutionally Protected Content categories seems designed to change the editorial decision-making of the regulated services, which is a flat-out form of censorship; and even if Zauderer is the applicable test, the bill seems likely to fail that test as well.

So if this provision gets struck and the TOS publication and delivery provision doesn't do anything helpful, it leaves the obvious question: why is the California legislature working on this and not the many other social problems in our state? The answer to that question is surely dispiriting to every California resident.

Reposted, with permission, from Eric Goldman's Technology & Marketing Law Blog.

Filed Under: ab 587, california, content moderation, disclosures, internet regulations, terms of service, transparency
