We can’t trust big tech or the government to weed out fake news, but a public-led approach just might work – The Conversation AU

Posted: February 25, 2021 at 1:56 am

The federal government's News Media and Digital Platforms Mandatory Bargaining Code, which passed the Senate today, makes strong points about the need to regulate misinformation.

In response, Google, Facebook, Microsoft, TikTok, Redbubble and Twitter have agreed to abide by a code of conduct targeting misinformation.

Suspiciously, however, the so-called Australian Code of Practice on Disinformation and Misinformation was developed by, well, these same companies. Behind it is the Digital Industry Group Inc (DIGI), an association they formed with some other companies.

In self-regulating, they hope to show the government they're addressing the proliferation of misinformation (false content spread without intent to deceive) and disinformation (content that intends to deceive) on their platforms.

But the only real commitment under the code is to appear to be doing something. Since the code is voluntary, the signatory platforms can opt in to the measures at their own discretion.

The code suggests platforms might release data trends about known misinformation, or might label known false content or content spread by seemingly unreliable sources. They might identify and restrict paid political ads trying to deceive users, or they might reveal the sources of misinformation.

These would all be worthwhile actions, but the operative word is "might": the platforms aren't bound by the code. Rather, the code will likely encourage them to police misinformation around an issue of the day, taking visible action on one topic without confronting the spread of other profitable false information on their platforms.

The consequences of this could be severe. False news can lead to dangerous conspiracies and armed attacks. It can even influence elections, as we saw in 2019 when Facebook hosted posts claiming the Labor party would introduce a death tax on inheritance. Things quickly spiralled.

The government has promised tougher regulation of misinformation if it feels the voluntary code isn't working. But we should be careful about letting the powerful regulate the powerful.

It's unclear, for instance, whether the Morrison government would view posts about a supposed Labor death tax as a real threat to democracy, even though this is misinformation.

Read more: How political parties legally harvest your data and use it to bombard you with election spam

Regulating speech on the internet is difficult. Misinformation is especially hard to define, because the distinction between genuinely dangerous misinformation and valued myth or opinion often rests on a community's values.

The latter is information that may not be accurate but which people still have a right to express. For instance:

Nickelback is the best band on the planet.

This is probably untrue, but the statement is relatively harmless: while it lacks truthfulness, its subjective nature is clear. Given this nuance, the solution is for misinformation to be policed by the community itself, not an elite body.

Reset Australia, an independent group that targets digital threats to democracy, recently proposed a project in which interested tech platforms and members of the public could subscribe to a live list of the most popular misinformation content.

A citizen-run jury could monitor the list to help ensure public oversight. This would involve the whole public sphere in the debate about misinformation, not just the government and platforms.
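To make the proposal concrete, here is a minimal Python sketch of what such a live list might look like. Everything in it (the class names, fields and ranking rule) is an illustrative assumption, not a detail of Reset Australia's actual design.

```python
# Hypothetical sketch of a "live list" of trending misinformation,
# loosely inspired by Reset Australia's proposal. Every name and field
# here is an illustrative assumption, not a real API.
from dataclasses import dataclass


@dataclass
class FlaggedPost:
    post_id: str
    claim: str                    # the suspect claim, as flagged
    shares: int                   # how widely the post has spread
    jury_reviewed: bool = False   # has a citizen jury examined it yet?


class LiveList:
    """Collects flagged posts and surfaces the most widely spread ones."""

    def __init__(self) -> None:
        self.posts: list[FlaggedPost] = []

    def flag(self, post: FlaggedPost) -> None:
        self.posts.append(post)

    def top(self, n: int = 10) -> list[FlaggedPost]:
        # Rank by spread, so jurors and platforms see the most viral claims first.
        return sorted(self.posts, key=lambda p: p.shares, reverse=True)[:n]


# Platforms, researchers or a citizen jury could "subscribe" by polling the top of the list.
live_list = LiveList()
live_list.flag(FlaggedPost("p1", "Labor will introduce a death tax", shares=50_000))
live_list.flag(FlaggedPost("p2", "Nickelback is the best band on the planet", shares=12))
for post in live_list.top(5):
    print(post.post_id, post.shares, post.claim)
```

Ranking by spread means reviewers see the most viral claims first, which is the point of public oversight: the worst offenders can't stay buried.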

Once fake news is in the open, it becomes easier for public figures, journalists and academics to expose.

Another effective strategy would be to create a national register of misinformation sources and content. Anyone could report what they believe is misinformation to the Australian Communications and Media Authority (ACMA), helping it quickly identify malicious sources and alert the platforms.

Digital platforms already do this internally, both through moderators and by allowing the public to report posts. But they don't show how posts are judged and don't release the data. By creating a public register, ACMA could monitor whether platforms are self-regulating effectively.

Such a register could also keep a record of legitimate and illegitimate information sources and give each one a reputation score. People who accurately reported misinformation could also receive high ratings, similar to Uber's ratings for drivers and passengers.
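As a rough illustration of how such scoring might work, here is a minimal Python sketch that assumes each reported item eventually receives an accuracy verdict. The Register class, the simple accuracy-ratio scoring rule and all identifiers are assumptions made for illustration; none of this reflects an actual ACMA system.

```python
# Hypothetical sketch of reputation scoring for a public misinformation
# register. The scoring rule (a plain accuracy ratio) and all names are
# illustrative assumptions, not an actual ACMA design.
from collections import defaultdict


class Register:
    def __init__(self) -> None:
        # Per reporter: [reports upheld, reports filed].
        # Per source: [items judged accurate, items reviewed].
        self.reporters = defaultdict(lambda: [0, 0])
        self.sources = defaultdict(lambda: [0, 0])

    def record_verdict(self, reporter: str, source: str, was_misinformation: bool) -> None:
        """Log the outcome of one reviewed report against a source."""
        self.reporters[reporter][1] += 1
        self.sources[source][1] += 1
        if was_misinformation:
            self.reporters[reporter][0] += 1   # the report was upheld
        else:
            self.sources[source][0] += 1       # the source was vindicated

    def reporter_rating(self, reporter: str) -> float:
        upheld, filed = self.reporters[reporter]
        return upheld / filed if filed else 0.0

    def source_score(self, source: str) -> float:
        accurate, reviewed = self.sources[source]
        return accurate / reviewed if reviewed else 0.0


register = Register()
register.record_verdict("citizen_42", "example-news.au", was_misinformation=True)
register.record_verdict("citizen_42", "example-news.au", was_misinformation=False)
print(register.reporter_rating("citizen_42"))    # 0.5: one of two reports upheld
print(register.source_score("example-news.au"))  # 0.5: one of two items accurate
```

A real system would need safeguards against coordinated false reporting, much like the Rotten Tomatoes episode discussed below.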

While this wouldn't restrict anyone's right to expression, it would make it easier to point to the reliability of an information source.

It's worth noting this type of community-based peer review system would be open to potential abuse. The movie review site Rotten Tomatoes has had serious problems with people trolling film reviews.

For example, Captain Marvel received a low audience rating because toxic online communities didn't like the idea of a female superhero and coordinated to rate the film poorly. But the platform was able to identify this pattern of behaviour.

The site ultimately protected the film's score by ensuring only people who had bought a ticket to see the movie could rate it. While any system is open to abuse, so is self-regulation, and communities have shown they can (and are willing to) solve such problems.

Wikipedia is another community-driven peer-review resource, and one most people consider highly valuable. It works because there are enough people in the world who care about the truth.

Judging the accuracy of claims made in public allows for a consensus that is open to be challenged. On the other hand, leaving decisions about truth to private companies or political parties could actually exacerbate the misinformation problem.

The news media bargaining code has finally passed. Facebook is set to bring news back to Australia, as well as start making deals to pay local news publishers for content.

The agreement between the government and Facebook, which serves the interests of those parties, seems like just another echo of the past. Large media players will retain some revenue, and Google and Facebook will continue to expand their immense control of the internet.

Meanwhile, users remain reliant on the benevolence of tech platforms to do just enough about misinformation to satisfy the government of the day. We should be careful about surrendering power to both platforms and governments.

This new code won't force significant change from either, despite the pressing need for it.

Read more: Google is leading a vast, covert human experiment. You may be one of the guinea pigs
