Pre-crime, algorithms, artificial intelligence, and ethics – Network World

Posted: February 26, 2017 at 11:17 pm

For more than 30 years, Gibbs has advised on and developed product and service marketing for many businesses, and he has consulted, lectured, and authored numerous articles and books.

I just binge-listened to an outstanding podcast, LifeAfter, which, without giving too much away, is about artificial intelligence and its impact on people. Here's the show's synopsis:

When you die in the digital age, pieces of you live on forever. In your emails, your social media posts and uploads, in the texts and videos you've messaged, and for some even in their secret online lives few even know about. But what if that digital existence took on a life of its own? Ross, a low-level FBI employee, faces that very question as he starts spending his days online talking to his wife Charlie, who died 8 months ago.

The ethical issues that this podcast raises are fascinating and riff on some of the AI-related issues we're starting to appreciate.

One of the big issues in the real world we're just getting to grips with lies in the way we humans create intelligent systems because whoever does the design and coding brings their own world views, biases, misunderstandings, and, most crucially, prejudices to the party.

A great example of this kind of problem in current AI products was discussed in a recent Quartz article, "We tested bots like Siri and Alexa to see who would stand up to sexual harassment." The results of this testing are fascinating and, to some extent, predictable:

Apple's Siri, Amazon's Alexa, Microsoft's Cortana, and Google's Google Home peddle stereotypes of female subservience, which puts their "progressive" parent companies in a moral predicament ... The message is clear: Instead of fighting back against abuse, each bot helps entrench sexist tropes through their passivity.

Now, some AI apologists might argue that we're in the earliest days of this technology, that the scope of what's required to deliver a general-purpose interactive digital assistant is still being explored, and that weaknesses and oversights are therefore to be expected and will be fixed, all in good time. Indeed, given the sheer magnitude of the work, this argument doesn't, on the face of it, seem unreasonable. But the long-term problem is the extent to which these deficiencies become "baked in" to these products such that they can never be wholly fixed, and subtle bias on a topic or position is often more effective at reinforcing belief and behavior than explicit support. Moreover, given that humans prefer to have their prejudices affirmed and supported, and that to be really effective their digital assistants will have to learn what their masters want and expect, there's a real risk of self-reinforcing feedback.
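That feedback risk is easy to see in a toy simulation. The Python sketch below is entirely my own illustration (the names, numbers, and update rule are invented and don't describe any real assistant): a user who favours one side approves answers that lean that way, and an assistant that reinforces whatever gets approved drifts away from neutrality.

```python
# Toy simulation (invented for illustration, not drawn from any real product):
# an assistant that learns from user approval can drift toward the user's bias.

import random

random.seed(1)

user_bias = 0.7        # how strongly the user favours one side (0.5 would be neutral)
assistant_lean = 0.5   # the assistant starts out neutral
learning_rate = 0.1

for step in range(50):
    # The assistant produces an answer leaning one way or the other.
    answer_matches_bias = random.random() < assistant_lean
    # The user is more likely to approve answers that match their own bias.
    approval_prob = user_bias if answer_matches_bias else 1 - user_bias
    if random.random() < approval_prob:
        # The assistant reinforces whatever got approved.
        target = 1.0 if answer_matches_bias else 0.0
        assistant_lean += learning_rate * (target - assistant_lean)

print(f"assistant lean after 50 rounds of feedback: {assistant_lean:.2f}")
# Starting from a neutral 0.5, the lean tends to drift toward the user's bias.
```

Nothing in that loop is sinister on its own; a learner chasing approval is doing exactly what it was built to do, which is precisely why the drift is so hard to notice from the inside.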

Baked-in acceptance, and even support, of sexist tropes is obviously bad in intelligent assistants, but when AI is applied to life-changing real-world problems, even the subtlest built-in bias becomes dangerous. How dangerous? Consider the non-AI, statistics-based algorithms that have for some years been used to derive "risk assessments" of criminal defendants, as discussed in ProPublica's article "Machine Bias," published last year. These algorithmic assessments, which are, essentially, "predictive policing" (need I mention "pre-crime"?), determine everything from whether someone can get bail and for how much to how harsh their sentence will be.

[ProPublica] obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.

The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.

When a full range of crimes were taken into account, including misdemeanors such as driving with an expired license, the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.

That's bad enough, but a sadly predictable built-in bias was revealed:

In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.
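To make "the same rate but in very different ways" concrete, here's a minimal Python sketch using invented confusion-matrix counts (not ProPublica's actual data). It shows how two groups can share the same overall error rate while one group is mostly harmed by false positives (wrongly flagged as high risk) and the other by false negatives (wrongly labelled low risk), an asymmetry that a single accuracy or "better than a coin flip" figure hides.

```python
# Hypothetical illustration only: these counts are invented, not ProPublica's data.
# tp = flagged high risk and did re-offend, fp = flagged high risk but didn't,
# tn = labelled low risk and didn't re-offend, fn = labelled low risk but did.

def rates(tp, fp, tn, fn):
    """Positive predictive value, false-positive rate, and false-negative rate."""
    ppv = tp / (tp + fp)   # of those flagged high risk, how many actually re-offended
    fpr = fp / (fp + tn)   # non-re-offenders wrongly flagged as high risk
    fnr = fn / (fn + tp)   # re-offenders wrongly labelled as low risk
    return ppv, fpr, fnr

# Invented counts for two groups of 1,000 defendants each.
groups = {
    "Group A": dict(tp=300, fp=300, tn=300, fn=100),
    "Group B": dict(tp=100, fp=100, tn=500, fn=300),
}

for name, c in groups.items():
    ppv, fpr, fnr = rates(**c)
    overall_error = (c["fp"] + c["fn"]) / sum(c.values())
    print(f"{name}: overall error {overall_error:.0%}, PPV {ppv:.0%}, "
          f"false-positive rate {fpr:.0%}, false-negative rate {fnr:.0%}")
```

Both invented groups come out with the same 40 percent overall error rate and the same positive predictive value, yet Group A is wrongly flagged as high risk about three times as often as Group B, while Group B is far more likely to be wrongly waved through as low risk. That is the general shape of the disparity the ProPublica article describes, even when the headline accuracy numbers look even-handed.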

The impetus to use algorithms to handle complex, expensive problems in services such as the cash-strapped court system is obvious, and even when serious flaws are identified in these systems, there's huge opposition to stopping their use, because the algorithms give the illusion of solving high-level system problems (consistency of judgments, cost, and speed of process) even though the consequences for individuals (disproportionate loss of freedom) are clear to everyone and life-changing for those affected.

Despite these well-known problems with risk-assessment algorithms, there's absolutely no doubt that AI-based solutions relying on Big Data and deep learning are destined to become de rigueur, and the biases and prejudices baked into those systems will be much harder to spot.

Will these AI systems be more objective than humans in quantifying risk and determining outcomes? Is it fair to use what will be alien intelligences to determine the course of people's lives?

My fear is that the sheer impenetrability of AI systems, the lack of understanding by those who will use them, and the "wow factor" of AI will make their adoption not an "if" but a "when" that is much closer than we might imagine. The result will be a great ethical void that supports even greater discrimination, unfair treatment, and expediency in an already deeply flawed justice system.

We know that this is a highly likely future. What are we going to do about it?

Comments? Thoughts? Drop me a line, then follow me on Twitter and Facebook and sign up for my newsletter!
