The race problem with AI: 'Machines are learning to be racist'

Artificial intelligence (AI) is already deeply embedded in so many areas of our lives. Society's reliance on AI is set to increase at a pace that is hard to comprehend.

AI isn't the kind of technology that is confined to futuristic science fiction movies: the robots you've seen on the big screen that learn how to think, feel, fall in love, and subsequently take over humanity. No, AI right now is much less dramatic and often much harder to identify.

In practice, most everyday artificial intelligence is machine learning. And our devices do this all the time. Every time you input data into your phone, your phone learns more about you and adjusts how it responds to you. Apps and computer programs work the same way.
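
To make that feedback loop concrete, here is a minimal sketch, with invented code and data, of the kind of adaptation described above: a toy next-word predictor that counts what one user types and suggests what it has seen most often. It corresponds to no real product.

```python
# Purely illustrative: a toy next-word predictor that adapts to what one user
# types, in the spirit of a phone keyboard learning your habits.
from collections import Counter, defaultdict

class ToyPredictor:
    def __init__(self):
        # For each word, count which words this user has typed after it.
        self.following = defaultdict(Counter)

    def learn(self, sentence):
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.following[prev][nxt] += 1

    def suggest(self, word):
        counts = self.following.get(word.lower())
        # Suggest whatever this user has most often typed next.
        return counts.most_common(1)[0][0] if counts else None

p = ToyPredictor()
p.learn("see you at the gym")
p.learn("see you at the station")
p.learn("see you at the gym")
print(p.suggest("the"))  # -> 'gym', because that is what this user types most
```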

Any digital program that displays learning, reasoning or problem solving is displaying artificial intelligence. So even something as simple as a game of chess on your desktop counts as artificial intelligence.

The problem is that the starting point for artificial intelligence always has to be human intelligence. Humans program the machines to learn and develop in a certain way, which means they pass on their unconscious biases.

The tech and computer industry is still overwhelmingly dominated by white men. In 2016, there were ten large tech companies in Silicon Valley, the global epicentre of technological innovation, that did not employ a single black woman. Three companies had no black employees at all.

When there is no diversity in the room, the machines learn the same biases and internal prejudices as the majority-white workforces that are developing them.

And, with a starting point that is grounded in inequality, machines are destined to develop in ways that perpetuate the mistreatment of and discrimination against people of colour. In fact, we are already seeing it happen.

In 2017, a video went viral on social media of a soap dispenser that would only automatically release soap onto white hands.

The dispenser was created by a company called Technical Concepts, and the flaw occurred because no one on the development team thought to test their product on dark skin.

A study in March last year found that driverless cars are more likely to drive into black pedestrians because their detection technology has largely been trained to recognise lighter skin, making them less likely to stop for black people crossing the road.

It would be easy to chalk these high-profile viral incidents up to individual error, but data and AI specialist Mike Bugembe says it would be a mistake to think of these problems in isolation. He says they are indicative of a much wider issue with racism in technology, one that is likely to spiral in the next few years.

'I can give you so many examples of where AI has been prejudiced or racist or sexist,' Mike tells Metro.co.uk.

'The danger now is that we are actually listening to and accepting the decisions of machines. When the computer says no, we increasingly accept that as gospel. So, we're now listening to something that is perpetuating, or even accentuating, the biases that already exist in society.'

Mike says the growth of AI can have much bigger, systemic ramifications for the lives of people of colour in the UK. The implications of racist technology go far beyond who does and who doesn't get to use hand soap.

AI is involved in decisions about where to deploy police officers and in predicting who is likely to take part in criminal activity or reoffend. He says that in the future we will increasingly see AI playing a part in things like hospital admissions, school exclusions and HR hiring processes.

Perpetuating racism in these areas has the potential to cause serious, long-lasting harm to minorities. Mike says it's vital that more black and minority people enter this sector to diversify the pool of talent and help to eradicate the problematic biases.

'If we don't have a system that can see us and give us the same opportunities, the impact will be huge. If we don't get involved in this industry, our long-term livelihoods will be impacted,' explains Mike.

'It's no secret that within six years, pretty much 98% of human consumer transactions will go through machines. And if these machines don't see us, minorities, then everything will be affected for us. Everything.'

An immediate concern for many campaigners, equality activists and academics is the rollout of facial recognition as a policing power.

In February, the Metropolitan Police began operational use of facial recognition CCTV, with vans stationed outside a large shopping centre in east London, despite widespread criticism of the method.

A paper last year found that using artificial intelligence to fight crime could raise the risk of profiling bias. The research warned that algorithms might judge people from disadvantaged backgrounds as a greater risk.

'The Metropolitan Police is the largest police force outside of China to roll it out,' explains Kimberly McIntosh, senior policy officer at the Runnymede Trust. 'We all want to stay safe, but giving dodgy tech the green light to turn our public spaces into surveillance zones should be treated cautiously.'

Kimberly points to research that shows that facial recognition software has trouble identifying the faces of women and black people.

'Yet it is being rolled out in areas like Stratford, which have significant black populations,' she says. 'There is currently no law regulating facial recognition in the UK. What is happening to all that data?'

'93% of the Met's matches have wrongly flagged innocent people. The Equality and Human Rights Commission is right: the use of this technology should be paused. It is not fit for purpose.'
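
That striking figure is easier to understand once you account for base rates: when almost everyone scanned is innocent, even a system that is right about most individual faces will produce mostly false alarms. The numbers below are hypothetical, chosen only to illustrate the arithmetic, not taken from the Met.

```python
# Hypothetical numbers, for intuition only: why most facial recognition
# 'matches' can be wrong even when the system is right about most faces.
crowd = 100_000               # people scanned (assumed)
on_watchlist = 10             # genuinely wanted people among them (assumed)
true_positive_rate = 0.80     # assumed: 80% of watchlist faces correctly flagged
false_positive_rate = 0.001   # assumed: 0.1% of innocent faces wrongly flagged

true_alerts = on_watchlist * true_positive_rate                # 8 correct alerts
false_alerts = (crowd - on_watchlist) * false_positive_rate    # ~100 false alerts

share_wrong = false_alerts / (true_alerts + false_alerts)
print(f"{share_wrong:.0%} of alerts point at innocent people")  # roughly 93%
```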

Kimberly's example shows how the inaccuracies and inherent biases of artificial intelligence can have real-world consequences for people of colour: in this case, the technology is already contributing to their disproportionate criminalisation.

The ways in which technological racism could personally and systemically harm people of colour are numerous and wildly varied.

Racial bias in technology already exists throughout society, including in smaller, more innocuous ways that you might not even notice.

'There was a time when, if you typed "black girl" into Google, all it would bring up was porn,' explains Mike.

'Google is a trusted source of information, so we can't overstate the impact that search results like these have on how people perceive the world and minorities. Is it any wonder that black women are persistently hypersexualised when online search results are backing up these ideas?'

'Right now, if you Google "cute baby", you will only see white babies in the results. So again, there are these more pervasive messages being pushed out there that speak volumes about the worth and value of minorities in society.'

Mike is now raising money to gather data scientists together for a new project. His aim is to train a machine that will be able to make sure other machines aren't racist.

'We need diversity in the people creating the algorithms. We need diversity in the data. And we need approaches to make sure that those biases don't carry on,' says Mike. 'So, how do you teach a kid not to be racist? The same way you will teach a machine not to be racist, right?'

Some companies say, 'Well, we don't put race in our feature set' (the data used to train their algorithms), so they think the problem doesn't apply to them. But that is just as meaningless and unhelpful as saying they don't see race.
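
A simple sketch shows why leaving race out of the feature set is not enough: other features can stand in for it. The data and feature names below (postcode, income) are entirely synthetic assumptions used to illustrate the proxy problem, not anyone's real system.

```python
# Synthetic data, invented feature names: even with no 'race' column, a model
# can reconstruct race from correlated features such as postcode.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
race = rng.integers(0, 2, n)              # protected attribute, never given to the model

# Assumed correlations: residential segregation ties postcode to race, and a
# historical pay gap ties income to race.
postcode = np.where(rng.random(n) < 0.8, race, 1 - race)
income = rng.normal(30 + 5 * race, 10, n)

X = np.column_stack([postcode, income])   # the 'race-blind' feature set
proxy_model = LogisticRegression(max_iter=1000).fit(X, race)
print(f"race recovered from race-blind features: {proxy_model.score(X, race):.0%}")
```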

Just as humans have to acknowledge race and racism in order to beat them, so too do machines, algorithms and artificial intelligence.

'If we are teaching a machine about human behaviour, it has got to include our prejudices, and strategies that spot them and fight against them.'
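
In practice, strategies that spot these biases often begin with auditing a model's outcomes across groups. Below is a minimal sketch of one common check, the demographic parity gap, using invented numbers; a large gap flags something to investigate rather than settling the question by itself.

```python
# A minimal audit sketch with invented numbers: compare how often a model
# grants the favourable outcome to each group.
import numpy as np

def demographic_parity_gap(predictions, group):
    """Difference in positive-prediction rates between groups labelled 0 and 1."""
    rate_0 = predictions[group == 0].mean()
    rate_1 = predictions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical outputs from some hiring model: 1 = invited to interview.
preds = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])
grp   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(f"parity gap: {demographic_parity_gap(preds, grp):.0%}")  # 80% vs 20% -> 60%
```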

Mike says that discussing racism and existing biases can be hard for people with power, particularly when their companies have a distinct lack of employees with relevant lived experiences. But he says making it less personal can actually make it easier for companies to address.

'The current definition of racism is very individual and very easy to shrug off. People can so easily say, "Well, that's not me, I'm not racist," and that's the end of that conversation,' says Mike.

'If you change the definition of racism to a pattern of behaviour, like an algorithm itself, that's a whole different story. You can see what is recurring, the patterns that pop up. Suddenly, it's not just me that's racist, it's everything. And that's the way it needs to be addressed on a wider scale.'

All of us are increasingly dependent on technology to get through our lives. It's how we connect with friends, pay for food, order new clothes. And on a wider scale, technology already governs so many of our social systems.

Technology companies must ensure that in this race towards a more digital-led world, ethnic minorities are not being ignored or treated as collateral damage.

Technological advancements are meaningless if their systems only serve to uphold archaic prejudices.

This series is an in-depth look at racism in the UK in 2020.

We aim to look at how, where and why racist attitudes and biases impact people of colour from all walks of life.

It's vital to improve the language we have to talk about racism and start the difficult conversations about inequality.

We want to hear from you - if you have a personal story or experience of racism that you would like to share, get in touch: metrolifestyleteam@metro.co.uk

