Artificial Intelligence as a Weapon for Hate and Racism

Posted: March 17, 2017 at 7:19 am

A SXSW discussion cautions society about the dark side of rapidly advancing artificial intelligence technology

(Image: iStock.com/t.light)

The stunning advancement of artificial intelligence and machine learning has brought real benefits to society. These technologies have improved medicine and how quickly doctors can diagnose disease, for example. IBM's AI platform Watson helps reduce water waste in drought-stricken areas. AI even entertains us: the more you use Netflix, the more it learns what your viewing preferences are and makes suggestions based on what you like to watch.
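As a rough illustration of that last example, here is a toy sketch of preference learning. It is an invented miniature, assuming a tiny watch history and catalog; real recommenders like Netflix's use collaborative filtering over millions of users, not a simple genre tally.

```python
from collections import Counter

# Toy sketch of preference learning in the spirit of the Netflix example.
# The titles, genres, and tally-based approach are all invented for
# illustration; real recommenders learn from millions of users at once.

watch_history = ["sci-fi", "sci-fi", "documentary", "sci-fi", "comedy"]

catalog = {
    "Dark Star Rising": "sci-fi",
    "Deep Sea Worlds": "documentary",
    "Laugh Hour": "comedy",
}

genre_counts = Counter(watch_history)          # the "learned" preferences
top_genre = genre_counts.most_common(1)[0][0]  # most-watched genre

suggestions = [title for title, genre in catalog.items() if genre == top_genre]
print(f"Because you watch a lot of {top_genre}: {suggestions}")
```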

However, there is a very dark side to AI, and it's worrying many social scientists and some in the tech industry. They say it's even more troubling that AI and machine learning are advancing so fast in the current political climate.

In an insightful session at SXSW, Kate Crawford, a principal researcher at Microsoft Research, offered some very disturbing scenarios involving AI.

"Just as we see AI advancing, something is happening: the rise of nationalism, of right-wing imperialism, and fascism," said Crawford. "It's happening here in the U.S., but it's also happening in Spain, Germany, in France […] The turn to authoritarianism is very different in every one of these countries, but as political scientists have pointed out, they have some shared characteristics: […] the desire to centralize power, to track populations and demonize outsiders, and to claim authority and neutrality without being held accountable."

How does AI factor into this? According to Crawford, AI is "really, really good at centralizing power; at claiming a type of scientific neutrality without being transparent. And this matters, because we are witnessing the historic rise in an anti-democratic political logic."

Crawford pointed out an example of a startup, called Faception, that is using AI and facial recognition to detect terrorists' faces. She likens this use of AI to the pseudoscience of phrenology: the study of facial and skull features to determine personality traits. "These kinds of debunked scientific practices were used to justify the mass murdering of Jews and slavery in the U.S.," Crawford said.

"I think it's worrying we're seeing these things from the past get a rerun in AI studies," Crawford told the audience. "Essentially, AI phrenology is on the rise at the same time as the re-rise of authoritarianism. Because even great tools can be misapplied and can be used to produce the wrong conclusions, and that can be disastrous if used [by those] who want to centralize their power and erase their accountability."

Machines are increasingly being given the same kinds of tasks: making predictions about segments of the population, often based on visual algorithms. During her discussion, Crawford demonstrated how visual algorithms can produce incorrect and biased results. She refers to the data on which these facial recognition and machine learning systems are based as "human-trained."

"Human-trained data contains all of our biases and stereotypes," she said. Crawford also said that AI and machine learning can be used in ways we don't even realize. Take, for example, a car insurer that wants to look at people's Facebook posts. "If [a person] is using exclamation marks [in their posts], the insurer might charge them [more] for their car insurance, because exclamations mean you are a little bit rash."
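To make that insurer scenario concrete, here is a minimal sketch of how such a pricing rule could operate. Everything in it is an assumption invented for illustration, including the base premium, the threshold, and the surcharge; Crawford described the idea, not an implementation.

```python
# Hypothetical sketch of the insurer example: pricing on a behavioral
# proxy scraped from social media. The numbers below (base premium,
# threshold, 10% surcharge) are invented for illustration only.

BASE_PREMIUM = 1200.00  # assumed annual premium in dollars


def exclamation_rate(posts: list[str]) -> float:
    """Average number of exclamation marks per post."""
    if not posts:
        return 0.0
    return sum(post.count("!") for post in posts) / len(posts)


def quoted_premium(posts: list[str]) -> float:
    """Add a surcharge when the 'rashness' proxy crosses a threshold."""
    surcharge = 0.10 if exclamation_rate(posts) > 1.0 else 0.0
    return BASE_PREMIUM * (1.0 + surcharge)


posts = ["Just got a new car!!!", "Best day ever!", "Commute was fine."]
print(f"premium: ${quoted_premium(posts):,.2f}")  # prints premium: $1,320.00
```

The unsettling part, in Crawford's framing, is not the arithmetic but the opacity: the customer has no way to know that punctuation, rather than a driving record, moved the price.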

She said the biases and errors of AI become dangerous when they are intertwined with social institutions like the justice system. She cited problems with an emerging application of machine learning: predictive policing.

"Police systems ingest huge amounts of historical crime data as a way of predicting where future crime might happen, where the hotspots will be," she explained. "But they have this unfortunate side effect: the neighborhoods that have had the worst policing in the past are the ones that are coming out as the future hotspots each time. So you end up in this vicious circle where the most policed areas [now] become the most policed areas in the future."
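The vicious circle she describes can be shown with a toy simulation. This is a sketch under invented assumptions (two neighborhoods with identical true crime rates, crime recorded only where police are present, patrols reassigned each period to the top "hotspot"); it is not a model of any real police system.

```python
# Toy simulation of the predictive-policing feedback loop. Assumption:
# both neighborhoods have the SAME underlying crime rate, but crime is
# only recorded where police patrol, and patrols chase the "hotspot".

TRUE_CRIME_RATE = 10.0           # incidents per period, identical everywhere
patrols = {"A": 0.6, "B": 0.4}   # A starts with slightly more policing
recorded = {"A": 0.0, "B": 0.0}  # cumulative recorded crime

for period in range(10):
    # Crimes are recorded in proportion to police presence.
    for area in patrols:
        recorded[area] += TRUE_CRIME_RATE * patrols[area]
    # "Predictive" step: the area with the most recorded crime becomes
    # the hotspot and gets most of next period's patrols.
    hotspot = max(recorded, key=recorded.get)
    patrols = {area: (0.8 if area == hotspot else 0.2) for area in patrols}

print({area: round(total, 1) for area, total in recorded.items()})
# {'A': 78.0, 'B': 22.0}: A's small head start makes it the permanent
# hotspot, despite identical true crime rates in both neighborhoods.
```

A's initial edge in patrols inflates its recorded crime, which wins it more patrols, which inflates its record further: exactly the lock-in Crawford describes.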

Crawford said that a study of Chicago's predictive policing efforts showed that the technology was completely ineffective at predicting future crime. The only thing it did was increase harassment of people in hotspot areas.

She ended the discussion by calling for a new resistance movement that actively monitors and raises awareness of the ways AI can harm society, especially in the hands of dictators or those who would use the technology to manipulate others.
