Why Artificial Intelligence Is Biased Against Women

A few years ago, Amazon employed a new automated hiring tool to review the resumes of job applicants. Shortly after launch, the company realized that resumes for technical posts that included the word "women's" (as in "women's chess club captain"), or contained references to women's colleges, were being downgraded. The reason came down to the data used to teach Amazon's system. Trained on 10 years of predominantly male resumes submitted to the company, the new automated system in fact perpetuated the old status quo, giving preferential scores to the applicants it was most familiar with.

Defined by AI4ALL as "the branch of computer science that allows computers to make predictions and decisions to solve problems," artificial intelligence (AI) has already made an impact on the world, from advances in medicine to language translation apps. But as Amazon's recruitment tool shows, the way in which we teach computers to make these choices, known as machine learning, has a real impact on the fairness of their functionality.

Take another example, this time in facial recognition. A joint study, "Gender Shades," carried out by MIT "poet of code" Joy Buolamwini and Timnit Gebru, a research scientist on the ethics of AI at Google, evaluated three commercial gender classification vision systems against their carefully curated dataset. They found that darker-skinned females were the most misclassified group, with error rates of up to 34.7 percent, while the maximum error rate for lighter-skinned males was 0.8 percent.
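The core of such an audit is simple: rather than reporting a single aggregate accuracy, compute the error rate separately within each demographic subgroup. Below is a minimal Python sketch of this kind of disaggregated evaluation; the field names and sample records are hypothetical illustrations, not the actual Gender Shades pipeline.

    from collections import defaultdict

    def error_rates_by_group(records):
        """Misclassification rate per subgroup.

        records: iterable of dicts with 'group', 'label', and 'prediction' keys.
        """
        errors = defaultdict(int)
        totals = defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            if r["prediction"] != r["label"]:
                errors[r["group"]] += 1
        return {g: errors[g] / totals[g] for g in totals}

    # Hypothetical per-image results from a commercial gender classifier
    results = [
        {"group": "darker-skinned female", "label": "female", "prediction": "male"},
        {"group": "lighter-skinned male", "label": "male", "prediction": "male"},
        # ... one record per evaluated face image
    ]
    print(error_rates_by_group(results))

Evaluated this way, a system with an impressive overall score can still hide a 34.7 percent error rate in one subgroup behind a 0.8 percent error rate in another.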

As AI systems like facial recognition tools begin to infiltrate many areas of society, such as law enforcement, the consequences of misclassification could be devastating. Errors in the software used could lead to the misidentification of suspects and ultimately mean they are wrongfully accused of a crime.

To end the harmful discrimination present in many AI systems, we need to look back at the data these systems learn from, which in many ways is a reflection of the bias that exists in society.

Back in 2016, a team investigated the use of word embeddings, which act as a dictionary of sorts for word meanings and relationships in machine learning. They trained an analogy generator on data from Google News articles to create word associations. For example, "man is to king as woman is to x," which the system filled in with "queen." But when faced with "man is to computer programmer as woman is to x," the word chosen was "homemaker."

Other female-male analogies, such as "nurse" to "surgeon," also demonstrated that word embeddings contain biases reflecting the gender stereotypes present in broader society (and therefore also in the dataset). However, "due to their wide-spread usage as basic features, word embeddings not only reflect such stereotypes but can also amplify them," the authors wrote.
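The analogy probe itself is plain vector arithmetic: the generator returns the vocabulary word whose vector lies closest to king - man + woman. A minimal sketch of that probe, assuming the pretrained Google News word2vec model distributed through gensim's downloader (the same kind of data the 2016 study examined):

    import gensim.downloader as api

    # Word2vec vectors trained on Google News (a large one-time download)
    model = api.load("word2vec-google-news-300")

    # "man is to king as woman is to x": x is the word closest to king - man + woman
    print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

    # The same arithmetic applied to occupation words is what surfaces
    # stereotyped completions such as "homemaker"
    print(model.most_similar(positive=["computer_programmer", "woman"], negative=["man"], topn=1))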

AI machines themselves also perpetuate harmful stereotypes. Female-gendered virtual personal assistants such as Siri, Alexa, and Cortana have been accused of reproducing normative assumptions about the role of women as submissive and secondary to men. Their programmed responses to suggestive questions contribute further to this.

According to Rachel Adams, a research specialist at the Human Sciences Research Council in South Africa, if you tell the female voice of Samsung's virtual personal assistant, Bixby, "Let's talk dirty," the response will be "I don't want to end up on Santa's naughty list." But ask the program's male voice, and the reply is "I've read that soil erosion is a real dirt problem."

Although changing society's perception of gender is a mammoth task, understanding how this bias becomes ingrained in AI systems can help shape our future with this technology. Olga Russakovsky, assistant professor in the Department of Computer Science at Princeton University, identified three root causes of it in an article for The New York Times.

"The first one is bias in the data," she wrote. "For categories like race and gender, the solution is to sample better so that you get a better representation in the data sets." Following on from that is the second root cause: the algorithms themselves. "Algorithms can amplify the bias in the data, so you have to be thoughtful about how you actually build these systems," Russakovsky continued.
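"Sampling better" can be as simple as reweighting examples so that an under-represented group contributes as much to training as an over-represented one. A minimal sketch of inverse-frequency weighting; the 90/10 split is an invented illustration:

    from collections import Counter

    def balanced_weights(groups):
        """Per-example weights inversely proportional to group frequency."""
        counts = Counter(groups)
        n, k = len(groups), len(counts)
        return [n / (k * counts[g]) for g in groups]

    groups = ["male"] * 90 + ["female"] * 10   # a skewed 90/10 training set
    weights = balanced_weights(groups)
    print(weights[0], weights[-1])  # ~0.56 per male example, 5.0 per female example

Weighted this way, the two groups contribute equally to the training loss overall. Note that this addresses only the first root cause; a carelessly built algorithm can still amplify whatever bias remains.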

The final cause mentioned is the role of humans in generating this bias. "AI researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities," Russakovsky said. "We're a fairly homogeneous population, so it's a challenge to think broadly about world issues."

A report from the research institute AI Now outlined the "diversity disaster" across the entire AI sector. Only 18 percent of authors at leading AI conferences are women, and just 15 percent and 10 percent of AI research staff positions at Facebook and Google, respectively, are held by women. Black women face further marginalization, as only 2.5 percent of Google's workforce is black, and at Facebook and Microsoft the figure is just 4 percent.

Many researchers across the sector believe that the key to solving the problem of bias in artificial intelligence lies in diversifying the pool of people who work on this technology. "There are a lot of opportunities to diversify this pool, and as diversity grows, the AI systems themselves will become less biased," Russakovsky wrote.

Kate Crawford, co-director and co-founder of the AI Now Institute at New York University, underscored the necessity of doing so. "Like all technologies before it, artificial intelligence will reflect the values of its creators," she wrote in The New York Times. Giving everyone a seat at the table, from design teams to company boards, will allow the concept of fairness in AI to be debated and made inclusive of a wider range of views. In turn, the data fed to machines for their learning will make their capabilities less discriminatory, providing benefits for all.

Attempts to do so are already underway. Buolamwini and Gebru introduced a new facial analysis dataset, balanced by gender and skin type, for their research, and Russakovsky has worked on removing offensive categories from the ImageNet dataset, which is used for object recognition in machine learning.
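Balancing by gender and skin type is essentially stratified sampling: cap every (gender, skin type) cell at the same number of images. A rough sketch of the idea, with hypothetical field names rather than Buolamwini and Gebru's actual pipeline:

    import random
    from collections import defaultdict

    def balance_by_cells(images, per_cell, seed=0):
        """Draw at most per_cell images from each (gender, skin_type) cell."""
        rng = random.Random(seed)
        cells = defaultdict(list)
        for img in images:
            cells[(img["gender"], img["skin_type"])].append(img)
        balanced = []
        for members in cells.values():
            balanced.extend(rng.sample(members, min(per_cell, len(members))))
        return balanced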

The time to act is now. AI is at the forefront of the fourth industrial revolution, and it threatens to disproportionately impact certain groups because of the sexism and racism embedded in its systems. Producing AI that is completely bias-free may seem impossible, but we can do far better than we currently do. And that begins with greater diversity among the people pioneering this emerging technology.
