How to Keep Your AI From Turning Into a Racist Monster


Working on a new product launch? Debuting a new mobile site? Announcing a new feature? If you're not sure whether algorithmic bias could derail your plan, you should be.

About

Megan Garcia (@meganegarcia) is a senior fellow and director of New America California, where she studies cybersecurity, AI, and diversity in technology.

Algorithmic bias, when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed, causes everything from warped Google searches to barring qualified women from medical school. It doesn't take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for.
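To make that dynamic concrete, here is a minimal, hypothetical sketch (the lending data and the `predicted_approval_rate` helper are invented for illustration): a "model" that does nothing more than learn historical approval rates from skewed records reproduces the skew, even though no line of code expresses any prejudice.

```python
# Hypothetical sketch: a model that simply learns historical base rates from
# skewed loan data will reproduce that skew in its predictions, even though
# nothing in the code mentions the applicant's group with any intent to discriminate.
from collections import defaultdict

# Made-up historical records: (group, approved). The imbalance reflects past
# human decisions, not applicant qualifications.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 40 + [("B", False)] * 60

# "Training": learn the approval rate per group -- a stand-in for any model that
# picks up group membership (or a proxy for it) as a predictive signal.
counts = defaultdict(lambda: [0, 0])   # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predicted_approval_rate(group):
    approvals, total = counts[group]
    return approvals / total

for group in ("A", "B"):
    print(f"group {group}: predicted approval rate {predicted_approval_rate(group):.0%}")
# group A: predicted approval rate 80%
# group B: predicted approval rate 40%  -- the historical skew, now automated
```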

It took one little Twitter bot to make the point to Microsoft last year. Tay was designed to engage with people ages 18 to 24, and it burst onto social media with an upbeat "hellllooooo world!!" (the "o" in "world" was a planet-earth emoji). But within 12 hours, Tay morphed into a foul-mouthed racist Holocaust denier that said feminists "should all die and burn in hell." Tay, which was quickly removed from Twitter, was programmed to learn from the behaviors of other Twitter users, and in that regard, the bot was a success. Tay's embrace of humanity's worst attributes is an example of algorithmic bias, when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed.

Tay represents just one example of algorithmic bias tarnishing tech companies and some of their marquee products. In 2015, Google Photos tagged several African-American users as gorillas, and the images lit up social media. Yonatan Zunger, Google's chief social architect and head of infrastructure for Google Assistant, quickly took to Twitter to announce that Google was scrambling a team to address the issue. And then there was the embarrassing revelation that Siri didn't know how to respond to a host of health questions that affect women, including "I was raped. What do I do?" Apple took action to handle that as well, after a nationwide petition from the American Civil Liberties Union and a wave of cringe-worthy media attention.

One of the trickiest parts about algorithmic bias is that engineers don't have to be actively racist or sexist to create it. In an era when we increasingly trust technology to be more neutral than we are, this is a dangerous situation. As Laura Weidman Powers, founder of Code2040, which brings more African Americans and Latinos into tech, told me, "We are running the risk of seeding self-teaching AI with the discriminatory undertones of our society in ways that will be hard to rein in, because of the often self-reinforcing nature of machine learning."

As the tech industry begins to create artificial intelligence, it risks inserting racism and other prejudices into code that will make decisions for years to come. And as deep learning means that code, not humans, will write code, there's an even greater need to root out algorithmic bias. There are four things that tech companies can do to keep their developers from unintentionally writing biased code or using biased data.

The first is lifted from gaming. League of Legends used to be besieged by claims of harassment until a few small changes caused complaints to drop sharply. The game's creator empowered players to vote on reported cases of harassment and decide whether a player should be suspended. Players who are banned for bad behavior are also now told why they were banned. Not only have incidents of bullying dramatically decreased, but players report that they previously had no idea how their online actions affected others. Now, instead of coming back and saying the same horrible things again and again, their behavior improves. The lesson is that tech companies can use these community policing models to attack discrimination: Build creative ways to have users find it and root it out.
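A hypothetical sketch of that community-review pattern follows (the `review` function and the 60 percent threshold are invented for illustration, not League of Legends' actual mechanics): reported behavior goes to a panel of users, a vote threshold decides the outcome, and the reported player is told why.

```python
# Hypothetical community-review sketch: panel votes decide the outcome, and the
# reported player always gets an explanation. Names and thresholds are illustrative.
def review(reason: str, panel_votes: list, threshold: float = 0.6) -> str:
    """panel_votes holds booleans: True means 'this violates the rules'.
    Returns the explanatory message sent back to the reported player."""
    if not panel_votes:
        return "No action: no community votes were cast."
    removals = sum(panel_votes)
    if removals / len(panel_votes) >= threshold:
        return f"Suspended: the community judged this to be {reason}, which violates the rules."
    return "No action: the community did not find a rule violation."

print(review("harassment", panel_votes=[True, True, True, False, True]))
# Suspended: the community judged this to be harassment, which violates the rules.
```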

Second, hire the people who can spot the problem before launching a new product, site, or feature. Put women, people of color, and others who tend to be affected by bias and are generally underrepresented in tech on companies' development teams. They'll be more likely to feed algorithms a wider variety of data and to spot code that is unintentionally biased. Plus, there is a trove of research showing that diverse teams create better products and generate more profit.
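As a hypothetical illustration of the kind of check such a team might formalize, the sketch below flags demographic groups that are badly underrepresented in a training set before launch (the `representation_report` helper and the 10 percent floor are assumptions for the example, not an industry standard).

```python
# Hypothetical pre-launch check: flag any demographic group that is badly
# underrepresented in the training data before it silently skews the model.
from collections import Counter

def representation_report(samples, group_key, min_share=0.10):
    """Return each group's share of the data and whether it falls below min_share."""
    groups = Counter(sample[group_key] for sample in samples)
    total = sum(groups.values())
    return {g: (n / total, n / total < min_share) for g, n in groups.items()}

# Toy training set for a photo-tagging model (labels are made up for illustration).
training_data = [{"skin_tone": "light"}] * 900 + [{"skin_tone": "dark"}] * 50 + \
                [{"skin_tone": "medium"}] * 50

for group, (share, flagged) in representation_report(training_data, "skin_tone").items():
    print(f"{group}: {share:.1%}" + ("  <-- underrepresented, collect more data" if flagged else ""))
```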

Third, allow algorithmic auditing. Recently, a Carnegie Mellon research team unearthed algorithmic bias in online ads. When they simulated people searching for jobs online, Google ads showed listings for high-income jobs to men nearly six times as often as to women with equivalent profiles. The Carnegie Mellon team has said it believes internal auditing, which would strengthen companies' ability to reduce bias, would help.
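The sketch below is a hypothetical, stripped-down version of that kind of audit, in the spirit of the Carnegie Mellon experiment rather than its actual tooling: matched simulated profiles that differ only in the audited attribute are run against a stand-in `serve_ad` function, and the outcome rates are compared.

```python
# Hypothetical auditing sketch: run matched simulated profiles that differ only
# in one attribute through the system under test and compare outcome rates.
# `serve_ad` stands in for whatever ad-serving call a real audit would exercise;
# here it is a deliberately biased toy, not a real API.
import random

def serve_ad(profile):
    rate = 0.30 if profile["gender"] == "male" else 0.05
    return random.random() < rate   # True = high-income job ad shown

def audit(n_trials=10_000):
    shown = {"male": 0, "female": 0}
    for _ in range(n_trials):
        for gender in shown:
            # Matched profiles: identical except for the audited attribute.
            if serve_ad({"gender": gender, "search": "executive jobs"}):
                shown[gender] += 1
    ratio = shown["male"] / max(shown["female"], 1)
    print(f"high-income ad shown: male={shown['male']}, female={shown['female']}, "
          f"ratio={ratio:.1f}x")

random.seed(0)
audit()
```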

Fourth, support the development of tools and standards that could get all companies on the same page. In the next few years, there may be a certification for companies actively and thoughtfully working to reduce algorithmic discrimination. Today we know our water is safe to drink because the EPA monitors how well utilities keep it contaminant-free. One day we may know which tech companies are working to keep bias at bay. Tech companies should support the development of such a certification and work to earn it once it exists. Having one standard will both ensure the sector sustains its attention to the issue and give credit to the companies using commonsense practices to reduce unintended algorithmic bias.

Companies shouldn't wait for algorithmic bias to derail their projects. Rather than clinging to the belief that technology is impartial, engineers and developers should take steps to ensure they don't accidentally create something that is just as racist, sexist, and xenophobic as humanity has shown itself to be.
