AI expert calls for end to use of ‘racially biased’ algorithms – www.computing.co.uk

Posted: December 13, 2019 at 3:24 pm

Decision algorithms are commonly infected with biases

Noel Sharkey, an expert in the field of artificial intelligence (AI), has urged the government to ban the use of all decision algorithms that impact on people's lives.

In an interview with the Guardian, Prof Sharkey said that such algorithms are commonly "infected with biases" and should not be expected to make fair or trustworthy decisions.

"There are so many biases happening now, from job interviews to welfare to determining who should get bail and who should go to jail. It is quite clear that we really have to stop using decision algorithms," says the Sheffield University professor and robotics/AI pioneer, who is also a leading figure in a campaign against autonomous weapons.

"I am someone who has always been very light on regulation and always believed that it stifles innovation. But then I realised eventually that some innovations are well worth stifling, or at least holding back a bit. So I have come down on the side of strict regulation of all decision algorithms, which should stop immediately," he added.

According to Sharkey, leading tech firms such as Microsoft and Google are aware of the algorithmic bias problem and are working on it, but none has so far come up with a solution.

Sharkey believes AI decision-making systems need to be tested in the same way as any new pharmaceutical drug is before it is allowed on to the market.

By this, Sharkey means testing AI systems on at least hundreds of thousands, and preferably millions, of people, until the algorithm shows no major inbuilt bias.
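The article does not specify how such a trial would be scored, but one common way to audit a decision algorithm on a large labelled population is to compare favourable-outcome rates across demographic groups. The sketch below is a minimal, hypothetical illustration of that idea in Python; the group labels, the toy model, the 5% gap threshold and the demographic-parity metric are all illustrative assumptions, not part of Sharkey's proposal or any regulatory standard.

```python
# Hypothetical sketch: audit a decision model on a demographically labelled
# test population and compare favourable-outcome rates across groups.
from collections import defaultdict

def positive_rate_by_group(records, model):
    """Return the fraction of favourable decisions the model gives each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for person in records:
        group = person["group"]           # protected attribute, used only for auditing
        totals[group] += 1
        if model(person["features"]):     # model returns True for a favourable decision
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def passes_parity_check(records, model, max_gap=0.05):
    """Flag the model if favourable-outcome rates differ by more than max_gap."""
    rates = positive_rate_by_group(records, model)
    return max(rates.values()) - min(rates.values()) <= max_gap

if __name__ == "__main__":
    # Toy data and a toy rule standing in for a trained model.
    people = [
        {"group": "A", "features": {"score": 0.9}},
        {"group": "A", "features": {"score": 0.2}},
        {"group": "B", "features": {"score": 0.8}},
        {"group": "B", "features": {"score": 0.7}},
    ]
    toy_model = lambda features: features["score"] > 0.5
    print(positive_rate_by_group(people, toy_model))
    print("passes parity check:", passes_parity_check(people, toy_model))
```

At the scale Sharkey describes, the point of such a check would simply be that a model showing large gaps between groups on a sufficiently large test population would not be cleared for deployment.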

To address this issue, US lawmakers introduced the Algorithmic Accountability Act in both the House and the Senate this April. The Act would require technology companies to ensure that their machine learning algorithms are free of gender, race and other biases before deployment.

If passed, the bill would require the Federal Trade Commission to create guidelines for assessing 'highly sensitive' automated systems. If a company finds that an algorithm poses a risk of privacy harm, it would have to take corrective action to fix anything in the algorithm that is "inaccurate, unfair, biased or discriminatory".

Last month Arvind Narayanan, an associate professor at Princeton, warned that most of the products or applications being sold today as AI are little more than "snake oil".

According to Narayanan, many companies have been using AI-based assessment systems to screen job applicants. Most such systems claim to work not by analysing what a candidate says or writes in their CV, but by assessing their speech patterns and body language.

"Common sense tells you this isn't possible, and AI experts would agree," Narayanan said.

According to Narayanan, the areas where AI use will remain fundamentally dubious include predicting criminal recidivism, forecasting job performance and predictive policing.

A recent study by Oxford Internet Institute researchers suggested that current laws have largely failed to protect the public from biased algorithms that influence decision-making on everything from housing to employment.

The study's lead researcher found many algorithms drawing conclusions about personal traits such as gender, ethnicity, religious beliefs and sexual orientation from individuals' browsing behaviour.
