AI robots learning racism, sexism and other prejudices from humans, study finds – The Independent

Posted: April 13, 2017 at 11:49 pm

Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words online looked at how closely different terms appear to each other in text, in the same way that automatic translators use machine learning to establish what language means.

Some of the results were stunning.

The researchers found male names were more closely associated with career-related terms than female ones, which were more closely associated with words related to the family.

This link was stronger than the non-controversial findings that musical instruments and flowers were pleasant and weapons and insects were unpleasant.

Female names were also strongly associated with artistic terms, while male names were found to be closer to maths and science ones.

There were strong associations, measured using word embeddings, between European American names and pleasant terms, and between African-American names and unpleasant terms.
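The association measure the article describes can be illustrated with a toy example: in a word-embedding model each word is a vector, and "closeness" is typically scored with cosine similarity. The vectors below are invented for illustration, not taken from the study, and real embeddings have hundreds of dimensions.

```python
import math

def cosine(u, v):
    # Cosine similarity: near 1.0 means the vectors point the same way
    # (strong association); near 0 means little association.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented 3-dimensional "embeddings" for illustration only.
vectors = {
    "flower":   [0.9, 0.1, 0.0],
    "pleasant": [0.8, 0.2, 0.1],
    "weapon":   [0.0, 0.9, 0.1],
}

print(cosine(vectors["flower"], vectors["pleasant"]))  # high: closely associated
print(cosine(vectors["weapon"], vectors["pleasant"]))  # low: weakly associated
```

The study's test works on the same principle, comparing how much closer one set of words (e.g. male names) sits to a set of attribute words (e.g. career terms) than another set does.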

The effects of such biases on AI can be profound.

For example Google Translate, which learns what words mean from the way people use them, translates the Turkish sentence "O bir doktor" into "he is a doctor" in English, even though Turkish pronouns are not gender-specific. The sentence can actually mean either "he is a doctor" or "she is a doctor".

But change "doktor" to "hemşire", meaning nurse, in the same sentence and it is translated as "she is a nurse".
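One way to see how a statistical translator can gender a genderless pronoun: if all the model has is corpus counts of how often each occupation appears near "he" versus "she", picking the more frequent pronoun reproduces the bias. The counts and the `translate_o_bir` function below are invented for illustration; this is not how Google Translate is actually implemented.

```python
# Hypothetical corpus co-occurrence counts, for illustration only.
cooccurrence = {
    "doctor": {"he": 900, "she": 300},
    "nurse":  {"he": 100, "she": 800},
}

def translate_o_bir(occupation):
    # Turkish "o" is gender-neutral; a purely statistical system may
    # resolve it to whichever English pronoun co-occurred more often
    # with the occupation in its training text.
    counts = cooccurrence[occupation]
    pronoun = max(counts, key=counts.get)
    return f"{pronoun} is a {occupation}"

print(translate_o_bir("doctor"))  # "he is a doctor"
print(translate_o_bir("nurse"))   # "she is a nurse"
```

The point of the sketch is that no rule anywhere says "doctors are male"; the skew in the training text alone is enough to produce the gendered output.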

Last year, a Microsoft chatbot called Tay was given its own Twitter account and allowed to interact with the public.

It turned into a racist, pro-Hitler troll with a penchant for bizarre conspiracy theories in just 24 hours. "[George W] Bush did 9/11 and Hitler would have done a better job than the monkey we have now," it wrote. "Donald Trump is the only hope we've got."

In a paper about the new study in the journal Science, the researchers wrote: "Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes.

"Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable.

"Already, popular online translation systems incorporate some of the biases we study. Further concerns may arise as AI is given agency in our society.

"If machine-learning technologies used for, say, résumé screening were to imbibe cultural stereotypes, it may result in prejudiced outcomes."

The researchers said the AI was not to blame for such problematic effects.

"Notice that the word embeddings know these properties of flowers, insects, musical instruments, and weapons with no direct experience of the world and no representation of semantics other than the implicit metrics of words' co-occurrence statistics with other nearby words," they wrote.

But changing the way AI learns would risk missing out on unobjectionable meanings and associations of words.

"We have demonstrated that word embeddings encode not only stereotyped biases but also other knowledge, such as the visceral pleasantness of flowers or the gender distribution of occupations," the researchers wrote.

The study also implies that humans may develop prejudices partly because of the language they speak.

"Our work suggests that behaviour can be driven by cultural history embedded in a term's historic use. Such histories can evidently vary between languages," the paper said.

"Before providing an explicit or institutional explanation for why individuals make prejudiced decisions, one must show that it was not a simple outcome of unthinking reproduction of statistical regularities absorbed with language.

"Similarly, before positing complex models for how stereotyped attitudes perpetuate from one generation to the next or from one group to another, we must check whether simply learning language is sufficient to explain (some of) the observed transmission of prejudice."

One of the researchers, Professor Joanna Bryson, of Princeton University, told The Independent that instead of changing the way AI learns, the way it expresses itself should be altered.

So the AI would still "hear" racism and sexism, but would have a moral code that would prevent it from expressing these same sentiments.
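Filtering at the output rather than changing how the model learns could look like the following minimal sketch. The blocklist, its placeholder entries and the `respond` wrapper are hypothetical, assumed only for illustration; a real moderation layer would be far more sophisticated than exact word matching.

```python
# Placeholder entries standing in for terms the system must not express.
BLOCKED_TERMS = {"slur1", "slur2"}

def respond(generated_text):
    # The model may have learned ("heard") biased associations, but the
    # filter prevents it from expressing the blocked terms verbatim.
    words = set(generated_text.lower().split())
    if words & BLOCKED_TERMS:
        return "[withheld by content filter]"
    return generated_text

print(respond("the weather is nice"))    # passes through unchanged
print(respond("text containing slur1"))  # withheld
```

The trade-off the article raises applies here: because the underlying associations are untouched, the system keeps the useful knowledge in the embeddings while the filter governs only what reaches the user.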

Such filters can be controversial. The European Union has passed laws to ensure the terms of AI filters are made public.

For Professor Bryson, the key finding of the research was not so much about AI but humans.

"I think the most important thing here is we have understood more about how we are transmitting information, where words come from and one of the ways in which implicit biases are affecting us all," she said.
