NIST Announces Workshop Focused on "Bias in AI" – JD Supra

Posted: June 20, 2020 at 10:41 am

As momentum builds to address race-based injustices in America, the National Institute of Standards and Technology (NIST) last week announced a workshop focused on understanding and addressing bias in Artificial Intelligence (AI) systems. The event will bring together members of the public and private sector to seek consensus on what 'bias' means in the context of AI and how to measure it.

NIST believes that finding common ground on these questions 'will lay important groundwork for upcoming efforts in NIST's AI work more broadly, including the development of standards and recommendations for achieving trustworthy AI.' The workshop will be held virtually on August 18, 2020, and organizations looking to take concrete actions to reduce biases based on race, ethnicity, gender, sexuality, and other protected characteristics in their products should consider participating.

The benefits and utility of AI are now well established: implementing systems and processes that leverage AI and machine learning can enhance efficiency, increase output, deliver insights, and much more. In addition, using AI to make or facilitate decisions provides an opportunity for organizations to eliminate or reduce explicit bias that may arise from humans making certain decisions, but algorithms will still reflect prejudices in the data they see.

Uncritically implementing facial recognition AI, for example, can result in high error rates when trying to identify racial minorities if almost all of the faces in the data that trained the AI were white. In other contexts, AI can unwittingly reproduce prejudices against women, members of the LGBTQ community, or other marginalized groups. And because AI systems are not always transparent (some systems do not explain the reasons for their decisions), it can be difficult to detect any inherent biases that may arise from incomplete or inaccurate training data.
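One concrete way to surface the kind of disparity described above is simply to compare error rates across demographic groups on a labeled evaluation set. The sketch below is a minimal illustration with entirely hypothetical data, not a depiction of any particular vendor's system or of NIST's eventual measurement methodology:

```python
# Minimal sketch: compare a model's error rates across demographic groups.
# All data below is hypothetical; a real audit would use a large, labeled
# held-out evaluation set and domain-appropriate error metrics.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its error rate."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical face-matching outcomes: (group, predicted_id, true_id)
results = [
    ("group_a", 1, 1), ("group_a", 2, 2), ("group_a", 3, 3), ("group_a", 4, 4),
    ("group_b", 1, 1), ("group_b", 2, 5), ("group_b", 3, 6), ("group_b", 4, 4),
]
rates = error_rates_by_group(results)
print(rates)  # here group_b's error rate (0.5) far exceeds group_a's (0.0)
```

Even a simple comparison like this makes the point that aggregate accuracy can look acceptable while one group bears most of the errors, which is part of why agreeing on *how* to measure bias matters.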

The potential for bias in AI systems has become a significant concern in recent years, recognized by privacy advocates, academics, and many private companies. Some states and the federal government have also addressed the issue in a variety of ways.

Given the plethora of government and private-sector efforts to address bias in AI, what might NIST's role be? As its name suggests, NIST specializes in creating standards, and the announcement of the workshop this August states that it will focus on (1) how to define 'bias' in the context of AI and (2) how to measure such bias.

Making progress on the definition of bias in AI would be a significant achievement since there is so little consensus regarding the meaning of many terms in this field. For example, New York City's AI report uses the term 'disproportionate impact' without defining it; Washington State's legislation similarly uses the terms 'bias' and 'disparate impact' without specifying their meaning; and Illinois's Artificial Intelligence Video Interview Act does not even define 'artificial intelligence.'

NIST views this workshop as one step in its ongoing work on AI issues. As noted above, one of the long-term goals mentioned in the announcement is to lay important groundwork for upcoming efforts in NIST's AI work more broadly, including the 'development of standards and recommendations for achieving trustworthy AI.'

This language suggests that NIST may hope eventually to produce a framework for organizations to use to mitigate bias in their AI systems, similar to the highly influential NIST Cybersecurity Framework. Ideally, NIST could produce another framework for developing 'trustworthy AI' that is flexible enough to be implemented by both startups and large multinational corporations; imaginative enough to remain relevant in the rapidly changing world of AI for years to come; and comprehensive enough that state, local, and federal governments do not feel the need to pass significant additional top-down laws or regulations in this space.

The NIST Cybersecurity Framework was developed with significant feedback from industry, and many organizations now voluntarily comply with it due to its strong reputation and flexibility.

Companies that want to provide input during NIST's process of developing standards and recommendations related to bias in AI (whatever form those may eventually take) should consider participating in the Bias in AI Workshop this August 18. And attending the workshop can help organizations ensure that they maintain their commitment to fighting prejudice even after injustices based on race, gender, or other statuses fade from the front page.
