Artificial Intelligence is not the cure for the COVID-19 infodemic

More than 3 billion people, around 50 percent of the world's population, engage with and post content online. Some of that content is misleading and potentially harmful, whether by design or as a side effect of its spread and manipulation. With billions of daily active users on social media platforms, even if a mere 0.1 percent of content contains mis- or disinformation, that still leaves millions of pieces of content to review.

In response to this challenge, automated content review technologies have emerged as an enticing and scalable way to help triage mis/disinformation online. Yet, while many technology companies and social media platforms have promoted artificial intelligence (AI) as an all-powerful answer to mis/disinformation, AI is not a panacea for information challenges.

In 2015, Microsoft co-founder Bill Gates gave a TED Talk that stressed our lack of preparation for the next pandemic. Fast forward to 2020, and conspiracy theorists have wrongfully used Gates's TED Talk as evidence that the novel coronavirus was his doing. The Gates conspiracy fueled misleading posts promoting alarming behavior, such as the creation of a dangerous and ineffective homemade, bleach-based "Miracle Mineral Solution" for preventing COVID-19. In this case, the repurposing of Gates's words and actions made it harder for people to find and discern reliable guidance when they needed it.

The Gates conspiracy exemplifies what the World Health Organization describes as the online "infodemic," drawing an almost eerie parallel between the public health effects of viral biology and those of viral online content. Ironically, these false narratives spread on the very devices and platforms that Gates and his technology sector colleagues created. These technology platforms, many of which are now powered by AI, enable the viral spread of a range of mis- and disinformation.

Despite what some AI evangelists may claim, AI is not equipped to independently interpret content and judge mis/disinformation. Identifying the many flavors of mis/disinformation often demands nuanced human judgment, especially when compared with other forms of problematic content.

Content judgments are inherently complex, and they are particularly difficult during COVID-19. As the health and scientific community's understanding of the virus evolves minute by minute, the boundary between accurate and misleading information shifts with it. What was deemed misleading last week ("wear a mask for protection") is now the most up-to-date, accurate guidance. Further complicating an AI response, mis/disinformation often retains a grain of truth and authenticity. A satirical video created with no intention to cause harm may convey a point through sarcasm, or it may fool viewers who do not understand the satire. When a conspiracy theorist tweets a link to the Gates TED Talk with their own misleading caption, the meaning of the original, authentic TED Talk video changes with that added context.

Even if our AI systems, or AI systems in combination with human review, could perfectly and accurately adjudicate content, at the end of the day it is human beings who must make sense of that content online. While labeling content has been touted as an effective mechanism for disclosing manipulated or misleading material to audiences, and thereby mitigating the impact of mis/disinformation, very little is known about how audiences actually interpret and respond to these labels. Even if platforms label videos or images as manipulated, users may not understand the content to be misleading. Or worse, users may judge unlabeled video or images from credible sources as inherently false.

Any effort to broadly deploy AI systems for detecting mis/disinformation must seek to understand the human factors that are integral to the success of this work. During a global pandemic such as COVID-19, it becomes even more vital to involve diverse, global voices of real people in both micro-level content review processes and macro-level policy and technology decisions around mis/disinformation. These challenges must be addressed in collaboration with civil society, media entities, and the social and behavioral science researchers who are already studying the human dynamics affecting public discourse. Coordinating across these disparate skill sets is difficult, but it is vital to capturing the humanistic complexity of mis/disinformation challenges that so often fall to technology companies.

COVID-19 obeys the laws of science and biology, but it also affects the human body and psyche in equal measure. And while mis/disinformation has certain dynamics that can be encoded into artificially intelligent systems, it also emerges in formats and contexts that only human judgment can interpret. Technology companies must not treat combating mis/disinformation solely as an engineering challenge. It is a challenge of complex human dynamics, and it requires the involvement of sectors and organizations well beyond Seattle and Silicon Valley.

Claire Leibowicz is a program lead at the Partnership on AI.

