Researchers were about to solve AI’s black box problem, then the lawyers got involved – The Next Web

Posted: December 18, 2019 at 8:44 pm

AI has a black box problem. We cram data in one side of a machine learning system and we get results out the other, but we're often unsure what happens in the middle. Researchers and developers nearly had the issue licked, with explainable algorithms and transparent AI trending over the past few years. Then came the lawyers.

Black box AI isn't as complex as some experts make it out to be. Imagine you have 1,000,000 different spices and 1,000,000 different herbs, and you only have a couple of hours to crack Kentucky Fried Chicken's secret recipe. You're pretty sure you have all the ingredients, but you're not sure which eleven herbs and spices you should use. You don't have time to guess, and it would take billions of years or more to manually try every combination. This problem can't realistically be solved using brute force, at least not under normal kitchen paradigms.
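
If you want to sanity-check that claim, here's a quick back-of-the-envelope calculation, taking the analogy's numbers at face value (exactly eleven ingredients drawn from a combined pantry of two million):

```python
import math

# Back-of-the-envelope: how many 11-ingredient recipes can you build from a
# combined pantry of 2,000,000 herbs and spices?
combos = math.comb(2_000_000, 11)
print(f"{combos:.3e} possible recipes")      # ~5.1e61

# Even at an absurdly optimistic billion recipes tasted per second:
years = combos / 1e9 / (60 * 60 * 24 * 365)
print(f"{years:.3e} years to try them all")  # ~1.6e45 years
```

So, yes: billions of years, or quite a bit more.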

But imagine if you had a magic chicken fryer that did all the work for you in seconds. You could pour all your ingredients into it and then give it a piece of KFC chicken to compare against. Since a chicken fryer can't taste chicken, it would rely on your taste buds to confirm whether it'd managed to recreate the Colonel's chicken or not.

It spits out a drumstick, you take a bite, and you tell the fryer whether the piece you're eating now tastes more or less like KFC's than the last one you tried. The fryer goes back to work, tries more combinations, and keeps going until it gets the recipe right and you tell it to stop.

That's basically how black box AI works. You have no idea how the magic fryer came up with the recipe. Maybe it used 5 herbs and 6 spices; maybe it used 32 herbs and 0 spices. It doesn't matter: all we care about is using AI to do something humans could do, but much faster.
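
If you want to see the shape of that loop in code, here's a toy sketch: a hill-climbing search driven entirely by comparative feedback. Everything here is invented for illustration (the pantry is shrunk to 1,000 ingredients so it finishes quickly, and `taste_test` stands in for your taste buds); no real system works exactly this way.

```python
import random

PANTRY = list(range(1000))               # stand-in ingredient IDs
SECRET = set(random.sample(PANTRY, 11))  # the hidden target recipe

def taste_test(current, candidate):
    """Stand-in for the human taster: keep whichever recipe tastes closer
    to the target. A real system would use a loss function or human label."""
    closeness = lambda r: len(set(r) & SECRET)
    return candidate if closeness(candidate) > closeness(current) else current

def mutate(recipe):
    """The fryer 'tries more combinations' by swapping one ingredient."""
    new = list(recipe)
    new[random.randrange(len(new))] = random.choice(PANTRY)
    return new

recipe = random.sample(PANTRY, 11)  # start from a random guess
for _ in range(200_000):            # keep going until told to stop
    recipe = taste_test(recipe, mutate(recipe))

print(sorted(set(recipe)) == sorted(SECRET))  # usually True by now
```

Note what the loop never records: why any particular swap helped, only that it did. That's the black box problem in miniature.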

This is fine when we're using black box AI to determine whether something is a hotdog or not, or when Instagram uses it to determine if you're about to post something that might be offensive. It's not fine when we can't explain why an AI sentenced a black man with no priors to more time than a white man with a criminal history for the same offense.

The answer is transparency. If there is no black box, then we can tell where things went wrong. If our AI sentences black people to longer prison terms than white people because it's over-reliant on external sentencing guidance, we can point to that problem and fix it in the system.
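
As a toy illustration of what transparency buys, here's a sketch using a fully interpretable linear model. The feature names and data are entirely invented, and this resembles no real sentencing system; it only shows that with a transparent model, a problematic dependence is readable straight off the weights.

```python
import numpy as np

features = ["prior_offenses", "offense_severity", "external_guideline_score"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))

# Hypothetical ground truth: outcomes driven almost entirely by the
# external guidance rather than the facts of the case.
y = 0.1 * X[:, 0] + 0.2 * X[:, 1] + 3.0 * X[:, 2] + rng.normal(scale=0.1, size=500)

weights, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
for name, w in zip(features, weights):
    print(f"{name:>25}: {w:+.2f}")
# The outsized weight on external_guideline_score is visible at a glance.
# Inside a black box, the same over-reliance would be invisible.
```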

But there's a huge downside to transparency: if the world can figure out how your AI works, it can figure out how to make it work without you. The companies making money off of black box AI, especially those like Palantir, Facebook, Amazon, and Google that have managed to entrench biased AI within government systems, don't want to open the black box any more than they want their competitors to have access to their research. Transparency is expensive and, often, exposes just how unethical some companies' use of AI is.

As legal expert Andrew Burt recently wrote in Harvard Business Review:

To start, companies attempting to utilize artificial intelligence need to recognize that there are costs associated with transparency. This is not, of course, to suggest that transparency isn't worth achieving, simply that it also poses downsides that need to be fully understood. These costs should be incorporated into a broader risk model that governs how to engage with explainable models and the extent to which information about the model is available to others.

The AI gold rush of the 2010s led to a Wild West situation where companies can package their AI any way they want, call it whatever they want, and sell it in the wild without regulation or oversight. Companies that have made millions or billions selling products and services related to biased, black box AI have managed to entrench themselves in the same position as the health insurance and fossil fuel industries. Their very existence is threatened by the idea that they may be regulated against doing harm to the greater good.

So will transparent algorithms fix the problem? Simply put: no. The lawyers will make sure we'll never know any more about why a commercial system is biased, even if we develop fully transparent algorithms, than we would if these systems remained black boxes. As Axios' Kaveh Waddell recently wrote:

Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions.

The calculus for the AI industry is the same as for the private healthcare industry in the US. Extricating biased black box AI from the world would probably put dozens of companies out of business and likely result in hundreds of billions of dollars lost. The US industrial law enforcement complex runs on black box AI; we're unlikely to see the government end its deals with Microsoft, Palantir, and Amazon any time soon. So long as lawmakers are content to profit from the use of biased, black box AI, it'll remain embedded in society.

And we can't rely on businesses themselves to end the practice. Our desire to extricate black box systems simply means companies can't blame the algorithm anymore, so they'll hide their work entirely. With transparent AI, we'll get opaque developers. Instead of choosing not to develop dual-use or potentially dangerous AI, they'll simply lawyer up.

As Burt puts it in his Harvard Business Review article:

Indeed, this is exactly why lawyers operate under legal privilege, which gives the information they gather a protected status, incentivizing clients to fully understand their risks rather than to hide any potential wrongdoings. In cybersecurity, for example, lawyers have become so involved that it's common for legal departments to manage risk assessments and even incident-response activities after a breach. The same approach should apply to AI.

When things go wrong and AI runs amok, the lawyers will be there to tell us the most company-friendly version of what happened. Most importantly, they'll protect companies from having to share how their AI systems work.

We're trading a technical black box for a legal one. Somehow, this seems even more unfair.
