How will the EU’s new act deal with online lies and fake news? – RTE.ie

Posted: February 6, 2021 at 8:37 am

Analysis: the new act represents incremental progress, but it remains far from clear whether its provisions will improve anti-disinformation efforts

Disinformation has become even more amplified in the last year. Covid-19 has proven to be a fertile breeding ground for lies, continuing into 2021 with the vaccine roll-out. In Ireland, digital lies circulated following the shooting of George Nkencho, while campaigns of disinformation flooded last year's US presidential election.

These events are symptomatic of a social media landscape that continually threatens democratic institutions. In December 2020, the European Union introduced the Digital Services Act to replace outdated rules for digital platforms with standards fit for the 21st century. While hailed by some as a watershed moment for big tech regulation, its role in curtailing digital disinformation remains ambiguous. While the act carries important signs of progress, it represents only an incremental step in the continuing battle between European institutions and anti-democratic actors who seek to influence elections.


From RTÉ Radio 1's Ryan Tubridy Show, CNN's Kerryman Donie O'Sullivan on how fake news and disinformation can be used to cause real harm

The act is the latest attempt by the European Union to regulate how "digital services" operate. The obsolete E-Commerce Directive has until now been the flagship European legal framework for regulating digital services in the single market. However, the communication landscape has been significantly altered since the introduction of the directive in 2000. Major platforms like Facebook, Twitter and YouTube were absent from the public sphere back then, but are now important conduits not just for commerce but also for democratic elections. Unsurprisingly, this prompted calls for reform.

Broadly, the act targets how technological platforms safeguard the rights of users and encourages greater transparency in how they operate. It introduces harmonised standards for tackling illegal content, and provides a basic framework to give users greater insight into how content is moderated and distributed through algorithms.

Provisions of the act introduce mechanisms for "flagging" unlawful content, challenging content moderation decisions, and insulating against misuse and abuse of "very large online platforms." The act also provides researchers with limited access to data on how the largest platforms operate. Through these necessary developments, the act takes a step in the right direction by incorporating two crucial principles of transparency and accountability.


From RTÉ Radio 1's Drivetime, Orla Twomey from the Advertising Standards Authority for Ireland and Niamh Sweeney from WhatsApp on tackling online disinformation during the Covid-19 pandemic

Less certain is how the act will insulate European democracy from digital disinformation. It is supposed to signify a transition from self-regulation to co-regulation but, in the context of disinformation, the current self-regulation of platforms simply means that platforms like Facebook and Twitter effectively craft their own rules for how they tackle disinformation.

While guidelines from the European Commission have technically existed since 2018 through the Codes of Practice on Disinformation, large platforms have never been legally obliged to incorporate procedures from these guidelines. This lack of enforceability has led to implementation gaps in how different platforms safeguard against disinformation. As a form of co-regulation, the act allows the EU to set an essential framework through a legislative proposal, with the platforms filling in the details of how these rules will materialise.

An ongoing problem that the act will not resolve is the issue of harmful but not illegal content. The act focuses on how platforms curtail "illegal" content, and introduces "trusted flaggers" to notify platforms when such content is recognised. New transparency obligations require platforms to share data surrounding how they suspend those who disseminate "manifestly illegal content".


From RTÉ Radio 1's Today with Sean O'Rourke, Carole Cadwalladr from The Observer and US congressman David Cicilline on the spread of fake news and disinformation

Risk assessments that platforms carry out in accordance with the act primarily address how to avoid the misuse of services to disseminate illegal content. However, unlike many other forms of harmful content, disinformation is often not strictly illegal. While co-ordinated disinformation campaigns distort the democratic process, the false content disseminated is often not unlawful, and therefore can avoid the same legal scrutiny that is applied to child pornography or material inciting terrorism.

Because the act focuses heavily on illegal content, its main contribution to fighting disinformation is to beef up the existing guidelines under the current Codes of Practice, which the European Commission has pledged to update by Spring 2021. While this comes across as a type of self-regulation 2.0, promising developments are in the legislative pipeline through the European Democracy Action Plan. This includes planned legislation to improve transparency of sponsored political content.

It must also be acknowledged that proposed "risk management" procedures within the act demonstrate an awareness of "intentional manipulation" of services and "inauthentic use of automated exploitation" of networks. While encouraging, these steps are piecemeal in the bigger picture.

Legislation alone cannot minimise disinformation and the problem is not as simple as removing individual pieces of "bad" content. Modern disinformation, often not illegal, exists along nodes of a sophisticated and layered system of content distribution. Users as well as platforms shape this system. Sophisticated actors exploit existing social divisions, and weaponise platforms in dynamic and inventive ways. This is evident through WhatsApp's involvement in spreading Covid-19 misinformation in Ireland and widespread lies on TikTok before the US presidential election.

The act is playing a game of catch-up with technology. What is fundamentally needed are legal frameworks that can both keep up with and anticipate technological change. For disinformation, the act is not the holy grail, but it hopefully lays the regulatory scaffolding for more concrete progress.

The views expressed here are those of the author and do not represent or reflect the views of RTÉ
