Since Platform-by-Platform Censorship Doesn’t Work, These Researchers Think, the Government Should Help ‘Halt the Spread of Misinformation’

Posted: August 26, 2021 at 3:05 am

Before Twitter banned then-President Donald Trump in response to the January 6 Capitol riot, the platform tried to police his false claims about election fraud by attaching warning labels or blocking engagement with them. A new study suggests those efforts were ineffective at preventing the spread of Trump's claims and may even have backfired by drawing additional attention to messages that Twitter deemed problematic.

Those findings highlight the limits of content moderation policies aimed at controlling misinformation and, more generally, the futility of responding to lies by trying to suppress them. But the researchers think their results demonstrate the need to control online speech "at an ecosystem level," with an assist from the federal government.

The study, published today in Harvard's Misinformation Review, looked at Trump tweets posted from November 1, 2020, through January 8, 2021, that Twitter flagged for violating its policy against election-related misinformation. Zeve Sanderson and four other New York University social media researchers found that tweets with warning labels "spread further and longer than unlabeled tweets." And while blocking engagement with messages was effective at limiting their spread on Twitter, those messages "were posted more often and received more visibility on other popular platforms than messages that were labeled by Twitter or that received no intervention at all."

Sanderson et al. caution that these correlations do not necessarily mean that Twitter's interventions boosted exposure to Trump's claims, since the explanation could be that "Twitter intervened on posts that were more likely to spread." But the results are at least consistent with the possibility that flagging tweets or blocking engagement with them added to their allure. Either way, those measures demonstrably did not stop Trump from promoting his fantasy of a stolen election.

The problem, as Sanderson and his colleagues see it, is insufficient cooperation across platforms. The government, they suggest, should do more to overcome it.

"Taken together, these findings introduce compelling new evidence for the limited impact of one platform's interventions on the cross-platform diffusion of misinformation, emphasizing the need to consider content moderation at an ecosystem level," the researchers write. "For state actors, legislative or regulatory actions focused on a narrow band of platforms may fail to curb the broader spread of misinformation. Alarmingly, YouTube has been largely absent from recent Congressional hearingsas well as from academic and journalistic workeven though the platform is broadly popular and served as a vector of election misinformation."

Just to be clear: Sanderson and his colleagues don't think it is "alarming" when the federal government pressures social media companies to suppress speech it considers dangerous. The alarming thing, as far as they are concerned, is that the pressure, including "legislative or regulatory actions" as well as congressional hearings, is not applied more broadly.

"Political actors seeking to advance a narrative online are not limited to working within a single platform," study coauthor Joshua Tucker complainsin an interview with USA Today. "People who are trying to control information environments and who are trying to push political information environments are in a multiplatform world. Right now, the only way we have to deal with content is on a platform-by-platform basis."

Megan Brown, another coauthor, suggests that the problem could be remedied if social media platforms reached an agreement about which kinds of speech are acceptable. "Misinformation halted on one platform does not halt it on another," she observes. "In the future, especially with respect to the ongoing pandemic and the 2022 midterms coming up, it will be really important for the platforms to coordinate in some way, if they can, to halt the spread of misinformation."

And what if they can't, or, more to the point, won't? Given the emergence of multiple social media platforms whose main attraction is their laissez-faire approach to content moderation, that kind of voluntary agreement seems pretty unlikely. It would require coercion by a central authority, which would be plainly inconsistent with the First Amendment. And even government-mandated censorship would not "halt the spread of misinformation." As dictators across the world and throughout history have discovered, misinformation (or speech they place in that category) wants to be free, and it will find a way.

This crusade to "halt the spread of misinformation" should trouble anyone who values free speech and open debate. The problem of deciding what counts as misinformation is not an inconvenience that can be overcome by collaboration. Trump's claim that Joe Biden stole the presidential election may seem like an easy call. Likewise anti-vaccine warnings about microchips, infertility, and deadly side effects. But even statements that are not demonstrably false may be deemed dangerously misleading, or not, depending on the censor's perspective.

The Biden administration's definition of intolerable COVID-19 misinformation, for example, clearly extends beyond statements that are verifiably wrong. "Claims can be highly misleading and harmful even if the science on an issue isn't yet settled," says Surgeon General Vivek Murthy, who urges a "whole-of-society" effort, possibly encouraged by "legal and regulatory measures," to combat the "urgent threat to public health" posed by "health misinformation." Given the many ways that the federal government can make life difficult for social media companies, they have a strong incentive to cast a net wide enough to cover whatever speech the administration considers "misleading," "harmful," or unhelpful.

Meanwhile, the companies that refuse to play ball will continue to offer alternatives for people banished from mainstream platforms, as the NYU study demonstrates. Leaving aside the question of whether interventions like Twitter's perversely promote the speech they target, they certainly reinforce the conviction that the government and its collaborators are trying to hide inconvenient truths. They also drive people with mistaken beliefs further into echo chambers, where their statements are less likely to be challenged. The alternative, rebutting false claims by citing countervailing evidence, may rarely be successful. But at least it offers a chance of persuading people, which is how arguments are supposed to be resolved in a free society.
