Fake News: A look into the Australian Code of Practice on Disinformation and Misinformation

Posted: June 28, 2021 at 10:20 pm

The Code

The Australian Code of Practice on Disinformation and Misinformation (the Code) commenced on 22 February 2021, around 12 months after the Australian Government asked digital platforms to develop a voluntary code to address disinformation and misinformation and assist users of their services to more easily identify the reliability, trustworthiness and source of news content.

The request formed part of a broader Australian Government strategy to reform the technology and information dissemination landscape and to implement certain recommendations made by the ACCC in its Digital Platforms Inquiry.

The Australian Communications and Media Authority (ACMA) oversaw development of the Code, which was drafted by the industry association DIGI (Digital Industry Group Inc.). Later this month, the ACMA is due to report to the Australian Government on whether the actions and responses of those digital platforms that have adopted the Code sufficiently respond to the concerns identified by the ACCC regarding harmful misinformation and disinformation. The Government will then consider the need for further measures, including, potentially, the introduction of mandatory regulation.

So far, voluntary signatories to the Code include Twitter, Google, Facebook, Microsoft, Redbubble, TikTok, Adobe and Apple. However, the Code encourages all other participants in the digital information sphere to use the Code as a guide to best practice in developing their own response to the evolving challenges of harmful disinformation and misinformation.

In anticipation of the ACMA report, this article explains the key features of the Code, the key themes of the first signatory reports submitted under the Code, how the Code fits into the broader regulatory landscape for online content in Australia, and what's next.

Key features of the Code

The Code targets misinformation and disinformation that threaten to undermine democratic and policy-making processes or public goods such as public health, safety, security or the environment (Harm).

Both misinformation and disinformation are defined as digital content that is verifiably false, misleading or deceptive, is propagated by users of digital platforms, and is reasonably likely to cause Harm. Misinformation is often lawful digital content that may not be clearly intended to cause Harm. Disinformation, by contrast, captures behaviours intended to artificially influence users' online conversations and/or to encourage users of digital platforms to spread digital content, as well as the propagation of digital content via spam and other forms of deceptive, manipulative or bulk, aggressive behaviour.

There are two key requirements for signatories under the Code: a mandatory commitment to the Code's Core Objective of providing safeguards against the Harms that may arise from disinformation and misinformation; and a commitment to publish an annual transparency report on the measures taken under the Code.

As the Code is voluntary, a signatory may withdraw from the Code or a particular commitment at any time.

What do the commitments actually require?

The Core Objective of the Code requires a signatory to develop and implement measures that provide safeguards against the Harms that may arise from the propagation of disinformation and misinformation on its digital platforms.

The additional objectives a signatory may choose to adopt will depend on how content is delivered on its platform (e.g. a user-generated content platform would likely adopt different measures to a search engine). For example, a signatory could commit to implementing measures that empower consumers to make better-informed choices about digital content. This could take the form of returning diverse perspectives on matters of public interest in response to an online search request, a signal to users indicating the credibility of a news source, or enabling a user to check the authenticity or accuracy of online content or to identify the source of political advertising.

The Code also provides examples of how the objectives and outcomes may be met, but these are guidelines only and each signatory can decide how it will moderate harmful misinformation and disinformation on its platform.

More than just arbitrary evaluation

Digital platforms (including the current signatories) already evaluate and moderate content to varying degrees in accordance with their own discrete policies. The Code offers industry a clear, unifying objective without reducing the flexibility digital platforms have in how they choose to moderate content. It encourages them to be more accountable in their role as facilitators of free speech and the open exchange of opinion, information, debate and conversation, by focusing attention on their responses to the Harms caused by disinformation and misinformation. The common reporting requirement will also help digital platforms and other stakeholders to evaluate their practices against those of other industry participants.

The Code goes beyond self-assessment. It requires an industry facility for handling complaints of non-compliance to be established within six months of its commencement (approximately by the end of August 2021), and an industry sub-committee to review signatories' actions and compliance every six months. The Code will also be reviewed after one year, and every two years thereafter, by industry, government and other stakeholders. These additional mechanisms should encourage greater responsiveness and engagement from signatories.

The outcomes of the first reports

The eight current signatories to the Code published their first reports under the Code in May 2021. All of the signatories demonstrated, to varying degrees, how their existing policy frameworks align with the core objective and the other objectives they chose to adopt, and explained their areas of focus for the future.

Key themes from the reports include:

How the Code fits into online safety regulation in Australia

The voluntary Code joins other laws and regulations that address online safety in Australia, such as the Enhancing Online Safety Act 2015 (Cth) and the impending Online Safety Act, but sets itself apart by focusing on digital content that is false, misleading or deceptive.

What's next

The measures taken by digital platforms in response to violations will be under review, with the ACMA poised to assess the effectiveness of the Code in addressing disinformation and misinformation on digital platforms later this month. Last year, the European Commission assessed the effectiveness of the similar, voluntary EU Code of Practice on Disinformation. It found a number of shortcomings owing to that code's self-regulatory nature, and recommended measures such as harmonising the key concepts of misinformation and disinformation and appointing a regulatory body to enforce compliance, two things the Australian Code itself does not prescribe.

