We Still Know Very Little About How AI Thinks

Posted: May 28, 2017 at 7:42 am

In Brief
AI is becoming more and more ubiquitous, with reports of advancements or new applications coming almost daily. How much do we know about how it thinks, and how are we trying to find out more?

AI as We Understand It

Most of the AI we know today operates on the principle of deep learning: a machine is given a set of training data and the desired outputs, and from them it works out its own rules for mapping one to the other. The layered, self-adjusting system that does this is called a neural network, and it refines itself over many repetitions of the training process. Building AI this way is a practical necessity, as a computer can tune these rules far faster than a human could write them; coding them manually would take lifetimes.
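
As a loose illustration of that training loop, the sketch below fits a tiny neural network to the XOR function: it starts with random weights and repeatedly nudges them to shrink the gap between its output and the desired output, rather than being programmed with explicit rules. The task, layer sizes, learning rate, and step count are illustrative assumptions, not details from the article.

```python
# Minimal sketch of "deep learning": adjust weights from data and desired
# outputs instead of hand-coding rules. XOR task and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs and the desired outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a small two-layer network.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: produce a prediction from the current weights.
    h = sigmoid(X @ W1)
    pred = sigmoid(h @ W2)

    # Backward pass: measure the error and nudge every weight to reduce it.
    err = pred - y
    grad_out = err * pred * (1 - pred)
    grad_hid = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

print(np.round(pred, 2))  # approaches [0, 1, 1, 0]: learned, not hand-coded
```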

Tommi Jaakkola, a professor of electrical engineering and computer science at MIT, says, "If you had a very small neural network, you might be able to understand it. But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable." We are at the stage of these large systems now. So what methods are we using to make these machines explain themselves, an issue that will have to be solved before we can place any trust in them?

1. Reversing the algorithms. In image recognition, this involves programming the machine to produce or modify pictures so that they accentuate the patterns it has learned to recognize (a rough code sketch of this idea follows the list). Take the example of a Deep Dream modification of The Creation of Adam, where the AI has been told to draw in dogs wherever it recognizes them. From this, we can learn what constitutes a dog for the AI: firstly, it only produces heads (meaning this is what largely characterizes a dog, according to it), and secondly, the patterns that the computer recognizes as dogs are clustered around Adam (on the left) and God (on the right).

2. Identifying the data it has used. Here the AI is instructed to record extracts and highlight the sections of text it relied on when applying the pattern it was told to recognize (a toy version appears after the list). Developed first by Regina Barzilay, the Delta Electronics Professor at MIT, this type of understanding applies to AIs that search for patterns in data and make predictions accordingly. Carlos Guestrin, a professor of machine learning at the University of Washington, has developed a similar system that presents the data alongside a short explanation of why it was chosen.

3. Monitoring individual neurons. Developed by Jason Yosinski, a machine learning researcher at Uber AI Labs, this involves probing an individual neuron and measuring which image stimulates it the most, which lets us deduce what the AI is looking for (the sketch below covers this idea too).
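
To make ideas 1 and 3 concrete, here is a rough sketch of the trick behind both: treat the input image itself as the thing to optimize, and adjust it by gradient ascent until a chosen unit inside the network fires as strongly as possible. The tiny randomly initialized network below is only a stand-in for a real trained model; with a trained classifier, as in Deep Dream or Yosinski's probes, the same loop surfaces recognizable patterns. The layer sizes, channel index, and step count are assumptions for illustration.

```python
# Sketch of feature visualisation / neuron probing: optimise an input image
# so that one chosen unit activates as strongly as possible.
import torch
import torch.nn as nn

# Stand-in model; in practice you would load a trained image classifier.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
)
model.eval()

# Start from random noise and treat the image itself as the parameter.
image = torch.randn(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

target_channel = 7  # the single "neuron" (feature map) we want to excite

for step in range(200):
    optimizer.zero_grad()
    activations = model(image)
    # Maximise the mean activation of one channel by minimising its negative.
    loss = -activations[0, target_channel].mean()
    loss.backward()
    optimizer.step()

# `image` now shows the input pattern that most strongly drives that unit.
```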

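Idea 2 can be approximated with something far simpler than the systems built by Barzilay's and Guestrin's groups: train a bag-of-words classifier and, for each prediction, report the words that pushed it toward its answer. The toy reviews, labels, and scoring rule below are invented for illustration only.

```python
# Sketch of "explain a text prediction by highlighting the words behind it",
# using a plain bag-of-words logistic regression as a simple stand-in.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "great food and friendly staff",
    "terrible service and cold food",
    "friendly staff, great atmosphere",
    "cold, terrible and slow service",
]
labels = [1, 0, 1, 0]  # 1 = positive review, 0 = negative review

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Explain one new prediction by scoring each word's contribution.
doc = "great staff but slow service"
vec = vectorizer.transform([doc])
contrib = vec.toarray()[0] * clf.coef_[0]
words = vectorizer.get_feature_names_out()
highlights = sorted(
    ((w, round(c, 2)) for w, c in zip(words, contrib) if c != 0),
    key=lambda t: abs(t[1]), reverse=True,
)
print(clf.predict(vec)[0], highlights)  # the prediction plus its key words
```
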
These methods, though, are proving largely ineffective. As Guestrin says, "We haven't achieved the whole dream, which is where AI has a conversation with you, and it is able to explain. We're a long way from having truly interpretable AI."

It is important to understand how these systems work, as they are already being applied to industries including medicine, cars, finance, and recruitment: areas that have fundamental impacts on our lives. To give this massive power to something we don't understand could be a foolhardy exercise in trust. That is, of course, provided that the AI is honest and does not suffer from the lapses in truth and perception that humans do.

At the heart of the effort to understand these machines is a tension: if we could predict their behavior perfectly, that predictability would rob AI of the autonomous intelligence that characterizes it. We must remember that we don't know how humans make these decisions either; consciousness remains a mystery, and the world remains an interesting place because of it.

Daniel Dennett warns, though, that one question needs to be answered before AI is introduced: "What standards do we demand of them, and of ourselves?" How will we design the machines that will soon control our world without us understanding them? In other words, how do we code our gods?
