Artificial Intelligence has to deal with its transparency problems

Posted: April 25, 2017 at 5:04 am

Artificial Intelligence breakthroughs and developments bring new challenges and problems. As AI algorithms grow more advanced, it becomes harder to make sense of their inner workings. Part of this is because the companies that develop them do not allow outside scrutiny of their proprietary algorithms. But much of it comes down to the simple fact that AI is becoming opaque through its increasing complexity.

And this can turn into a problem as we move forward and Artificial Intelligence becomes more prominent in our lives.


By a wide margin, AI algorithms perform much better than their human counterparts at the tasks they master. Self-driving cars, for instance, which rely heavily on machine learning algorithms, are expected to eventually reduce road accidents by as much as 90 percent. AI diagnosis platforms spot early signs of dangerous illnesses far better than humans do, and help save lives. And predictive maintenance can detect signs of wear in machinery and infrastructure in ways that are impossible for humans, preventing disasters and reducing costs.

But AI is not flawless, and it does make mistakes, albeit at a lower rate than humans. Last year, the AI-powered opponent in the game Elite Dangerous went berserk and started creating super-weapons to hunt players. In another case, Microsoft's AI chatbot Tay started spewing racist comments within a day of its launch. And remember that time Google's face recognition started applying offensive labels to pictures?

None of these mistakes was critical, and the damage could be shrugged off without much thought. However, neural networks, machine learning algorithms, and other subsets of AI are finding their way into more sensitive domains, including healthcare, transportation and law, where mistakes can have serious and sometimes fatal consequences.

We humans make mistakes all the time, including fatal ones. But the difference is that we can explain the reasons behind our actions and bear the responsibility for them. Even the software we used before the age of AI was built on explicit code and rule-based logic: mistakes could be examined and reasoned out, and culpability could be well defined.

The same can't be said of Artificial Intelligence. In particular, neural networks, which are the key component of many AI applications, are something of a black box. Often, not even the engineers who built them can explain why their algorithm made a certain decision. Last year, Google's Go-playing AI AlphaGo stunned the world by coming up with moves that professional players couldn't have conceived of.

As Nils Lenke, Senior Director of Corporate Research at Nuance, says about neural networks, "It's not always clear what happens inside. You let the network organize itself, but that really means it does organize itself: it doesn't necessarily tell you how it did it."

This can cause problems when those algorithms have full control over decision-making. Who will be responsible if a self-driving car causes a fatal accident? You can't hold any of the passengers accountable for something they didn't control. And the manufacturers will have a hard time explaining an event that involves so many complexities and variables. And don't expect the car itself to start explaining its actions.

The same can be said of an AI application that has autonomous control over a patient's treatment, or of a risk-assessment algorithm that decides whether convicts stay in prison or go free.

So can we trust Artificial Intelligence to make decisions on its own? For non-critical tasks such as advertising, games and Netflix suggestions, where mistakes are tolerable, we can. But in situations where the social, legal, economic and political repercussions could be disastrous, we can't, not yet. The same goes for scenarios where human lives are at stake. We're still not ready to forfeit control to the robots.

As Lenke says, "[Y]ou need to look at the tasks at hand. For some, it's not really critical if you don't fully understand what happens, or even if the network is wrong. A system that suggests music, for example: all that can go wrong is, you listen to a boring piece of music. But with applications like enterprise customer service, where transactions are involved, or computer-assisted clinical documentation improvement, what we typically do there is, we don't put the AI in isolation, but we have it co-work with a human being."

For the moment, Artificial Intelligence shows its full potential in complementing human efforts. We're already seeing inroads in fields such as medicine and cybersecurity: AI takes care of the data-oriented research and analysis and presents human experts with invaluable insights and suggestions. The experts then make the decisions and assume responsibility for the possible consequences.

In the meantime, firms and organizations must do more to make Artificial Intelligence transparent and understandable. One example is OpenAI, a nonprofit research company founded by Tesla's Elon Musk and Y Combinator's Sam Altman. As the name suggests, OpenAI's goal is to open AI research and development to everyone, independent of financial interests.

Another organization, the Partnership on AI, aims to raise awareness of AI challenges such as bias and to help address them. Founded by tech giants including Microsoft, IBM and Google, the Partnership will also work on AI ethics and best practices.

Eventually, for better or worse, we'll achieve Artificial General Intelligence: AI that is on par with the human brain. Maybe then our cars and robots will be able to go to court and stand trial for their actions. But by then, we'll be dealing with entirely different problems.

That's for the future. In the present, human-dominated world, to make critical decisions you have to be either flawless or accountable. For the moment, AI is neither.
