Our fear of artificial intelligence? It is all too human

Posted: April 23, 2017 at 12:53 am

The classic sci-fi fear that robots will intellectually outpace humans has resurfaced now that artificial intelligence is part of our daily lives. Today, artificially intelligent programs deliver food, deposit checks and help employers make hiring decisions. If we are to worry about a robot takeover, however, it is not because artificial intelligence is inhuman and immoral, but because we are coding distinctly human prejudice into it.

Last year, Microsoft released an artificially intelligent Twitter chatbot named Tay, aimed at engaging Millennials online. The idea was that Tay would spend some time interacting with users, absorb relevant topics and opinions, and then produce its own content. In less than 24 hours, Tay went from tweeting that "humans are super cool" to racist, neo-Nazi one-liners, such as: "I [expletive] hate [slur], I wish we could put them all in a concentration camp with kikes and be done with the lot." Needless to say, Microsoft shut down Tay and issued an apology.

We need to hold the companies that make our AI-enabled devices accountable to a standard of ethics.

As the Tay disaster revealed, artificial intelligence does not always distinguish between the good, the bad and the ugly in human behavior. The type of artificial intelligence frequently used in consumer products is called machine learning. Before machine learning, humans analyzed data, found a pattern and wrote an algorithm (like a step-by-step recipe) for the computer to use. Now, we feed the computer huge numbers of data points, and the computer itself spots the pattern and writes the algorithm to follow.
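A minimal Python sketch makes the shift concrete. The spam-filtering task, sample subject lines and capitalization rule below are invented for illustration; they are not from the article.

    # Before machine learning: a human studies the data, spots the pattern
    # ("spam subject lines shout in capital letters") and writes the recipe.
    def is_spam_hand_written(subject):
        return subject.isupper()

    # With machine learning: we hand the computer labeled examples instead...
    examples = [
        ("WIN A FREE PRIZE", True),
        ("Meeting moved to 3pm", False),
        ("ACT NOW LIMITED OFFER", True),
        ("Lunch tomorrow?", False),
    ]

    def caps_ratio(text):
        letters = [c for c in text if c.isalpha()]
        return sum(c.isupper() for c in letters) / max(len(letters), 1)

    # ...and it finds the dividing line on its own: here, the midpoint
    # between the two groups' average capitalization.
    spam = [caps_ratio(s) for s, label in examples if label]
    ham = [caps_ratio(s) for s, label in examples if not label]
    threshold = (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

    def is_spam_learned(subject):
        return caps_ratio(subject) > threshold

    print(is_spam_learned("FREE VACATION"))     # True
    print(is_spam_learned("Notes from today"))  # False

The learned rule is only as good as the examples it was derived from, which is exactly where the trouble starts.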

For example, if we wanted the artificial intelligence to correctly identify cars, then we'd teach it what cars looked like by giving it lots of pictures of cars. If all the pictures we chose happened to be red sedans, then the artificial intelligence might think that cars, by definition, are red sedans. If we then showed the artificial intelligence a picture of a blue sport utility vehicle, it might determine it wasn't a car. This is all to say that the accuracy of AI-powered technology depends on the data we use to teach it.
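A toy continuation of the car example shows how the skewed training set produces the mistake. The two-number "picture" encoding and the threshold are invented for illustration:

    import math

    # Each "picture" is reduced to two made-up numbers: hue (0 = red, 1 = blue)
    # and body shape (0 = sedan, 1 = SUV). Every training example is a red sedan.
    training_cars = [(0.05, 0.10), (0.10, 0.00), (0.00, 0.05)]

    # "Training" here just averages the examples into a prototype of a car.
    centroid = tuple(sum(feature) / len(training_cars)
                     for feature in zip(*training_cars))

    def looks_like_a_car(picture, threshold=0.5):
        # Accept anything close enough to the learned prototype.
        return math.dist(picture, centroid) < threshold

    print(looks_like_a_car((0.10, 0.10)))  # red sedan -> True
    print(looks_like_a_car((0.90, 0.95)))  # blue SUV  -> False

The failure is entirely downstream of the data: feed the same code a mix of colors and body styles and the blue SUV passes.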

When there is bias in the data used to train artificial intelligence, there is bias in its output.

AI-controlled online advertising is almost six times more likely to show high-paying job posts to men than to women. An AI-judged beauty contest found white women most attractive. Artificially intelligent software used in court to help judges set bail and parole sentences also showed racial prejudice. As ProPublica reported, "The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants." It is not that the algorithm is inherently racist; it's that it was fed stacks of court filings that were harsher on black men than on white men. In turn, the artificial intelligence learned to call black defendants criminals at an unfairly higher rate, just like a human might.

That algorithm-fueled artificial intelligence amplifies human bias should make us wary of Silicon Valley's claim that this technology will usher in a better future.

Even when algorithms are not involved, old-fashioned assumptions make their way into the newest gadgets. I walked into a room the other day to a man yelling, "Alexa, find my phone!" only to realize later that he was talking to his Amazon Alexa robot personal assistant, not a human female secretary. It is no coincidence that all the AI personal assistants marketed to perform traditionally female tasks (Apple's Siri, Microsoft's Cortana and Amazon's Alexa) default to female voices. What is disruptive about that?

Some have suggested that AI's bias problem stems from the homogeneity of the people making the technology. Silicon Valley's top tech firms are notoriously dominated by white men, and hire fewer women and people of color than the rest of the business sector. Companies such as Uber and Tesla have gained reputations for corporate cultures hostile to women and people of color. Google was sued in January by the Department of Labor for failing to provide compensation data, and then charged with underpaying its female employees (Google is federally contracted and must hire in accordance with federal law). There is no question that there should be more women and people of color in tech. But adding diversity to product teams alone will not counteract the systemic nature of the bias in the data used to train artificial intelligence.

Careful attention to how artificial intelligence learns will require placing antibias ethics at the center of tech companies' operating principles, not just an after-the-fact inclusion measure mentioned on the company website. This ethical framework exists in other fields: medicine, law, education, government. Training, licensing, ethics boards, legal sanctions and public opinion coalesce to establish standards of practice. For instance, medical doctors are taught the Hippocratic oath and agree to uphold certain ethical practices or lose their licenses. Why can't tech have a similar ethical infrastructure?

Perhaps ethics in tech did not matter as much when the products were confined to calculators, video games and iPods. But now that artificial intelligence makes serious, humanlike decisions, we need to hold it to humanlike moral standards and humanlike laws. Otherwise, we risk building a future that looks just like our past and present.

Madeleine Chang is a San Francisco Chronicle staff writer. Email: mchang@sfchronicle.com Twitter: @maddiechang
