4 Ways to Address Gender Bias in AI – Harvard Business Review

Posted: November 25, 2019 at 2:46 pm

Executive Summary

Any examination of bias in AI needs to recognize the fact that these biases mainly stem from humans' inherent biases. The models and systems we create and train are a reflection of ourselves. So it's no surprise to find that AI is learning gender bias from humans. For instance, natural language processing (NLP), a critical ingredient of common AI systems like Amazon's Alexa and Apple's Siri, among others, has been found to show gender biases, and this is not a standalone incident. There have been several high-profile cases of gender bias, including computer vision systems for gender recognition that reported higher error rates for recognizing women, specifically those with darker skin tones. In order to produce technology that is more fair, there must be a concerted effort from researchers and machine learning teams across the industry to correct this imbalance. We have an obligation to create technology that is effective and fair for everyone.

Any examination of bias in AI needs to recognize the fact that these biases mainly stem from humans' inherent biases. The models and systems we create and train are a reflection of ourselves.

So it's no surprise to find that AI is learning gender bias from humans. For instance, natural language processing (NLP), a critical ingredient of common AI systems like Amazon's Alexa and Apple's Siri, among others, has been found to show gender biases, and this is not a standalone incident. There have been several high-profile cases of gender bias, including computer vision systems for gender recognition that reported higher error rates for recognizing women, specifically those with darker skin tones. In order to produce technology that is more fair, there must be a concerted effort from researchers and machine learning teams across the industry to correct this imbalance. Fortunately, we are starting to see new work that looks at exactly how that can be accomplished.

Building fair and equitable machine learning systems.

Of particular note is the bias research being carried out on word embeddings, the numerical representations of words that serve as inputs to natural language processing models. Word embeddings represent each word as a sequence, or vector, of numbers. If two words have similar meanings, their associated embeddings will be close to each other in a mathematical sense. The embeddings encode this information by assessing the context in which a word occurs. For example, AI has the ability to objectively fill in the word "queen" in the sentence "Man is to king as woman is to X." The underlying issue arises in cases where AI fills in sentences like "Father is to doctor as mother is to nurse." The inherent gender bias in the remark reflects an outdated perception of women in our society that is not based in fact or equality.
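The analogy-completion behavior described above can be sketched with vector arithmetic: "a is to b as c is to X" is answered by finding the word whose embedding is closest to b - a + c. The sketch below uses toy 3-dimensional vectors with values chosen purely for illustration (real embeddings such as word2vec or GloVe have hundreds of dimensions and are learned from text); the biased "nurse" completion is baked into these hypothetical vectors to mirror the bias the article describes, not derived from a real model.

```python
import numpy as np

# Toy embeddings (hypothetical values for illustration only).
embeddings = {
    "man":    np.array([ 1.0, 0.2, 0.1]),
    "woman":  np.array([-1.0, 0.2, 0.1]),
    "king":   np.array([ 1.0, 0.9, 0.3]),
    "queen":  np.array([-1.0, 0.9, 0.3]),
    "doctor": np.array([ 0.4, 0.5, 0.9]),
    "nurse":  np.array([-0.6, 0.4, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    # Solve "a is to b as c is to X" via X ~ b - a + c,
    # returning the nearest word that isn't one of the inputs.
    target = embeddings[b] - embeddings[a] + embeddings[c]
    candidates = [w for w in embeddings if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(embeddings[w], target))

print(analogy("man", "king", "woman"))    # the expected completion: queen
print(analogy("man", "doctor", "woman"))  # the biased completion: nurse
```

The second call shows the failure mode in miniature: because "nurse" sits closer to the "woman" direction in this toy space, the arithmetic reproduces the stereotyped association even though nothing about the profession warrants it.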

Few studies have assessed the effects of gender bias in speech with respect to emotion, and emotion AI is starting to play a more prominent role in the future of work, marketing, and almost every industry you can think of. In humans, bias occurs when a person misinterprets the emotions of one demographic category more often than another; for instance, mistakenly thinking that one gender category is angry more often than another. This same bias is now being observed in machines and how they misclassify information related to emotions. To understand why this is, and how we can fix it, it's important to first look at the causes of AI bias.
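The kind of bias described above, a model misclassifying one demographic group more often than another, can be made concrete by comparing per-group error rates. A minimal sketch, using hypothetical group names and hand-made predictions (not real model output):

```python
# Each record: (demographic group, true emotion label, model's prediction).
# Values are hypothetical, chosen to illustrate an error-rate gap.
records = [
    ("group_a", "angry", "angry"),
    ("group_a", "happy", "happy"),
    ("group_a", "angry", "angry"),
    ("group_a", "happy", "angry"),
    ("group_b", "happy", "angry"),
    ("group_b", "happy", "angry"),
    ("group_b", "angry", "angry"),
    ("group_b", "happy", "happy"),
]

def error_rate(group):
    # Fraction of this group's examples the model labeled incorrectly.
    rows = [(true, pred) for g, true, pred in records if g == group]
    return sum(true != pred for true, pred in rows) / len(rows)

for g in ("group_a", "group_b"):
    print(g, error_rate(g))
```

Here group_b's examples are misread as "angry" twice as often as group_a's, which is exactly the pattern (one gender category perceived as angry more often) that the paragraph above describes; auditing a model means computing gaps like this on real held-out data.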

What Causes AI Bias?

In the context of machine learning, bias can mean that there's a greater level of error for certain demographic categories. Because there is no one root cause of this type of bias, researchers must take numerous variables into account when developing and training machine-learning models, with factors that include:

Four Best Practices for Machine-Learning Teams to Avoid Gender Bias

Like many things in life, the causes and solutions of AI bias are not black and white. Even fairness itself must be quantified to help mitigate the effects of unwanted bias. For executives who are interested in tapping into the power of AI, but are concerned about bias, it's important to ensure that the following happens on your machine-learning teams:

Although examining these causes and solutions is an important first step, there are still many open questions to be answered. Beyond machine-learning training, the industry needs to develop more holistic approaches that address the three main causes of bias, as outlined above. Additionally, future research should consider data with a broader representation of gender identities, such as transgender and non-binary, to help expand our understanding of how to handle this diversity.

We have an obligation to create technology that is effective and fair for everyone. I believe the benefits of AI will outweigh the risks if we can address them collectively. It's up to all practitioners and leaders in the field to collaborate, research, and develop solutions that reduce bias in AI for all.
