Artificial intelligence expert: True artificial intelligence should also have a consciousness, but we are far from that – The Slovak Spectator

Posted: October 31, 2019 at 5:48 am

Artificial intelligence is an issue that has gained much popularity in the past few years.

This is also evident in the number of technologies referring to artificial intelligence (AI). Autonomous cars and personal assistants like Apple's Siri are often spoken about, while machine learning, deep learning and neural networks frequently appear in writing. What do these terms mean, and what is the difference between them? How far has technology based on elements of AI progressed? We discussed these topics in a series of interviews with Juraj Jánošík, an artificial intelligence expert at the ESET company.

If we can simulate human intelligence, consciousness and thinking with some technology, we achieve artificial intelligence. There is a term for it - artificial general intelligence - but there is also a concept called super intelligence. While artificial general intelligence (AGI) is meant to imitate human thinking, including its faults, super intelligence (SI) should go even further, exceed the limits of human consciousness and thinking, and considerably surpass them. However, these are largely philosophical discussions, and we have to admit that we are still far away, even in the development of AGI.

These terms are frequently confused, even by professionals. Simply put, artificial intelligence is an umbrella term. It covers a wide range of topics, including robotics, machine learning and so on. Thus, machine learning is just one sphere of AI, and currently, it is probably gaining the most attention. Deep learning, in turn, is just one part of machine learning. This sphere is inspired by how the brain functions and tries to simulate the connections between neurons in the brain.

The idea of machine learning is quite simple. We have a lot of data available, and through ML, we want to make a compact representation of it. This means that if I have a huge amount of data, I do not have to sort through it all on my own. It is enough for me to take a smaller sample, sort it by hand, and train an algorithm on it so that it learns the basic sorting/classification. Then, I let the trained algorithm work on another, smaller sample, and watch whether it sorts it as I intended. If not, I adjust its behaviour, for example by specifying the criteria. If I am satisfied with the algorithm's performance, I use it on the whole database, and the algorithm sorts it on its own in a much shorter time than any human would manage.
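To make that workflow concrete, here is a minimal, hedged sketch using scikit-learn and synthetic data (the library, the classifier and the dataset are illustrative assumptions, not part of the interview):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Pretend this is the "huge amount of data"; only a small part will be hand-labelled.
X, y = make_classification(n_samples=10_000, n_features=10, random_state=0)
X_small, X_rest, y_small, y_rest = train_test_split(X, y, train_size=0.05, random_state=0)

# Learn the basic sorting/classification from the small, hand-sorted sample.
model = DecisionTreeClassifier(max_depth=5, random_state=0)
model.fit(X_small, y_small)

# Check on another small sample whether it sorts the data as intended...
X_check, X_bulk, y_check, _ = train_test_split(X_rest, y_rest, train_size=0.05, random_state=0)
print("accuracy on check sample:", accuracy_score(y_check, model.predict(X_check)))

# ...and, if satisfied, let it classify the whole remaining database on its own.
predictions = model.predict(X_bulk)
```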

For example, if we want to teach a computer how to distinguish a cup, we load thousands or millions of photos of cups and glasses. From these pictures, the algorithm tries to create some sort of generalisation on its own. Then, when I show it a new photo of a cup, it will be able to tell the probability that this is a cup. If I am not content with the results, I can adjust the criteria, for example by telling it that the object is a cup, and so on.

Currently, when AI is mentioned, it is machine learning that is talked about the most. It already functions on a regular basis, for example by recommending programmes to Netflix users based on the programmes they have already seen. Mobile phones that categorise photographs, autonomous cars and cyber-security are further examples of machine learning in everyday use. Right now, the biggest discussion in AI revolves around machine learning; global companies like Google and Apple are investing massively in these technologies.

Deep learning also works with large amounts of data, from which we need to create a compact representation. That representation is called a model, and it then makes predictions. However, here we do not use tree algorithms but rather neural networks.

Neural networks are inspired by how the human brain works, by the functioning of neurons. The brain is basically a huge network of neurons, which receives some inputs. These inputs are evaluated in the brain, and then the brain sends the outputs into our organism. A neural network works the same way. We have some inputs that enter the network. The network assigns a certain significance, or weight, to the entries, evaluates them, and then returns the outputs to us.
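As an illustration of this description, a toy forward pass through a two-layer network can be sketched in a few lines of NumPy (the sizes, weights and activation function are arbitrary assumptions, purely for illustration):

```python
import numpy as np

def relu(x):
    # A simple activation function used to "evaluate" each layer.
    return np.maximum(0, x)

rng = np.random.default_rng(0)
x = rng.random(4)           # the inputs entering the network
W1 = rng.random((5, 4))     # weights: the "significance" assigned to the inputs
W2 = rng.random((2, 5))     # weights of the second layer

hidden = relu(W1 @ x)       # the inputs are evaluated layer by layer
output = W2 @ hidden        # and the network returns outputs to us
print(output)
```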

Let us try, for instance, to explain it through the example of a decision tree, which is a common classification algorithm in ML. In a decision tree, each decision takes me to another one, followed by another, similar to climbing a growing tree. Either I climb onto one branch or onto another, and then I face the next branch, the next layer. A machine learning model of this kind works in a similar way: either this or that, and so on, step by step.

By contrast, when it comes to neural networks, the impulse enters something we can imagine as a network and crosses it by passing through several layers simultaneously, or it can even loop back. There are even neural networks with cells that decide what I will use in this situation and what I will discard but remember, so that I can use it later. So, it is closer to the real functioning of the brain, even though it is not an exact copy of how the brain works, of course.

Simply put, yes. This also implies a fundamental difference, which concerns interpretability. In the decision tree, I can find out retroactively why the AI decided the way it did. I can look back at the individual steps and evaluate which way it decided at each step. With neural networks, this is not so simple, as the path of the impulse is not direct. The impulse is evaluated many times, has certain weights allocated to it, and the algorithm creates a generalisation. But I cannot say why it allocated certain weights to individual neurons, or determine why it decided the way it did. This is a big problem when applying these technologies to decisions involving humans, as we cannot say clearly why the model decided in a certain way.
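The interpretability contrast can be illustrated with a short sketch: scikit-learn can print back every rule a decision tree has learned, which is exactly what a neural network does not offer (the dataset and feature names below are illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every split the model makes can be read back and explained to a person.
print(export_text(tree, feature_names=["sepal length", "sepal width",
                                       "petal length", "petal width"]))
```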

Yes. For example, in the banking sector, AI is used to evaluate a client's creditworthiness. This is a very sensitive issue, in that people want to know why the bank has not approved their credit application. Hardly anyone wishes to hear that it was artificial intelligence that decided on this and, moreover, that we are unable to explain why.

Apart from this, there is also the issue of input data. AI can learn incorrect generalisations from the data available, for example racial bias. Statistically, the input data may imply that there is a higher probability of a specific group of the population not repaying a mortgage. An incorrect selection of data can lead to prejudice in the decision-making of the resulting model. However, we try to prevent this in our work.

Technically, we could already apply machine learning and deep learning in banking, but this has not been done on a mass scale for the abovementioned reasons.

Yes, there are still other ways. Many algorithms used today are old: some were established back in the 1950s, or even earlier. Basically, everything we draw from today goes back to the 1950s, 1960s, or 1970s. The last considerable innovations date back to the 1980s and 1990s. Since then, we have essentially only improved what had already been invented, or found practical uses for algorithms that previously existed only on paper.

Paradoxically, the development of ML was aided by computer games, because they drove increases in computing performance, especially in graphics chips. Another factor was the arrival of technologies associated with big data and high-capacity, fast storage. Until then, databases had also been a problem. The development of these two spheres, i.e. computing performance and databases, has led to the current situation: most companies focus precisely on machine learning, which inevitably depends on both. But there are definitely more ways.

Well, not just the games, but we owe them considerable credit in this. When we reduce machine learning to its mathematical core, it is basically composed of matrices. Simply put, we need to multiply decimal numbers in huge quantities. This is what effectively happens at the computing level. And in this, computers are much better than humans. Exactly the same happens with graphics cards when rendering the environment in which games are played, or when watching an HD video. Basically, these are the same mathematical operations.
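A trivial NumPy sketch of what "the same mathematical operations" means in practice: one matrix multiplication of this size already involves roughly a billion multiply-add operations (the sizes are arbitrary, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((1_000, 1_000))
B = rng.random((1_000, 1_000))

C = A @ B   # ~10^9 multiply-adds; GPUs excel at exactly this kind of work
print(C.shape)
```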

Of course, algorithms have been developing and improving, and new ones have even appeared, so you cannot say that we have made zero progress. However, it is true that even the current models of AI, commonly used in today's products, were devised in the 1990s. The development of machine learning, widely popular today, occurred thanks to the development of the technologies mentioned above. When it comes to ideas, we have not moved on fundamentally in the past 20 to 30 years. I have not seen any revolutionary idea that would considerably change the development of AI. Rather, we all work on the foundations laid in the past, and we are improving them.

Let us use the very popular algorithm LSTM, or long short-term memory, as an example. This is a type of deep learning based on neural networks, which is used to process sequential data, mainly images and sound. It is exactly this algorithm that is used when creating fake videos, the so-called deep fakes, which are also very popular now. The video that spread around the world, in which Barack Obama says something he never said in reality, was produced in part with an algorithm invented by two people at Graz University back in 1997. However, the algorithm didn't become popular until recently. In the 1990s, most people and companies could not afford a computing device that could deliver such performance. Today, it is much more accessible.
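As a hedged sketch only (the interview does not name any framework; Keras, the layer sizes and the random data are my assumptions), an LSTM model for sequential data can look like this:

```python
import numpy as np
from tensorflow import keras

# Dummy sequential data: 200 sequences, 30 time steps, 8 features each.
X = np.random.rand(200, 30, 8)
y = np.random.randint(0, 2, size=200)

model = keras.Sequential([
    # LSTM cells decide what to keep, what to forget and what to reuse later.
    keras.layers.LSTM(16, input_shape=(30, 8)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)
```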

There is an older approach that is effective and often used in robotics or in industrial control: genetic algorithms. This approach is inspired by the replication and division of cells, which sometimes involve mutations. Similarly, with such an algorithm we define a population, change its functions over time, and watch how these changes, or mutations, alter the results.

I can use a slightly bizarre example from the stock exchange. In order to predict how the stock market will develop over time, we need to follow numerous indices. We seek the right function that tells us when it is the best moment to conclude a deal, to sell shares, and so on. For this, we need to follow many parameters and search for a balance between them based on previous data. Thus, we create a function, enter input data, and observe how the whole system behaves and how it changes. Afterwards, we make changes to this function, i.e. the mutations, and watch how the system has changed and how it is developing. We do this again and again, until we find a suitable model which will at least partially represent reality.
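A toy genetic-algorithm sketch along these lines (the candidate function, the data and the mutation scheme are invented for illustration; real trading models are far more involved): mutate a population of candidate parameters and keep whatever fits the previous data best.

```python
import random

def fitness(params, data):
    # How well a candidate function a*x + b explains the previous observations.
    a, b = params
    return -sum((a * x + b - y) ** 2 for x, y in data)

data = [(x, 2 * x + 1) for x in range(10)]   # pretend historical data
population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]

for generation in range(100):
    # Keep the fittest candidates...
    population.sort(key=lambda p: fitness(p, data), reverse=True)
    survivors = population[:5]
    # ...and refill the population with mutated copies of them.
    population = survivors + [(a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
                              for a, b in survivors for _ in range(3)]

print("best candidate found:", population[0])
```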

There is an approach called good old-fashioned AI. In Slovak, it is usually referred to as an expert system. In this case, an expert who understands AI defines fixed rules according to which the programme will behave. These rules can change in the process, but they will again be changed by the expert, not by the AI itself. Thus, this is human supervision of artificial intelligence.
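A minimal sketch of such an expert system, with made-up rules for illustration (the rules, thresholds and field names are my assumptions, not ESET's):

```python
def classify_file(is_signed: bool, writes_to_system_folder: bool, size_kb: int) -> str:
    # Fixed rules written by a human expert; the program only follows them.
    if not is_signed and writes_to_system_folder:
        return "suspicious"
    if size_kb > 100_000:
        return "needs review"
    return "clean"

print(classify_file(is_signed=False, writes_to_system_folder=True, size_kb=250))
```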

Another popular approach is represented by Markov chains, whose foundations were laid by the Russian mathematician Andrey Markov at the beginning of the 20th century and which are nowadays widely used as statistical models of real processes. They are used in robotics and finance, to optimise queues at airports, and in the PageRank algorithm of the Google search engine. These methods have become the basis for the area of machine learning known as reinforcement learning. Reinforcement learning, combined with expert systems, was used, for example, in AlphaGo.
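A Markov chain itself is easy to sketch: the next state depends only on the current one (the states and transition probabilities below are invented for illustration):

```python
import random

transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

state = "sunny"
for _ in range(10):
    probs = transitions[state]
    # Pick the next state according to the probabilities of the current one.
    state = random.choices(list(probs), weights=list(probs.values()))[0]
    print(state)
```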

For instance, the media broke the story about artificial intelligence defeating the best player of the game Go. IBM's Watson AI was also highly publicised. These forms of artificial intelligence combine machine learning and expert systems. Their use is limited, but within their defined boundaries they perform excellently. However, that is all. The bottom line is that we have AI that can defeat someone in games but that cannot make decisions in other spheres.

Watson, for instance, is good at putting things into context. The paradox, though, is that it cannot decide based on what it has discovered. So, it is not a conscious or purposeful activity. Watson is great at evaluating an MRI. Analyses suggest that it has a higher rate of success than most radiologists. This is understandable, as a radiologist's effectiveness is derived from their experience, from how many X-rays they have already seen. This is, essentially, machine learning.

Moreover, radiologists' decision-making is often impacted by human weaknesses like fatigue, current mood, or whether the person is hungry or thirsty. Watson is not affected by such factors. It is enough to pour thousands or millions of X-ray images classified as good, bad, or whatever into the AI. Based on these images, Watson is able to predict a diagnosis with a very high success rate. It has the capacity to see more images than a single radiologist can see in their lifetime. Moreover, it can even recognise a change in a single pixel, which is close to impossible for humans.

ML is popular because it is suited to a wide range of tasks we face in everyday life. For instance, it can make predictions based on previous data, look for anomalies, and power computer vision, which has been functioning for many years. So, ML is not popular because it is the best path towards AGI.

Many wise people even consider ML a dead end. Returning to the example with the cup I mentioned: if you want to teach a child what a cup is, you do not show them a million cups. The human brain does not work like this.

This is a good question. There are two ways to view it. What does insufficient computing performance mean? Some researchers claim that if we wanted to translate the human brain into a computer, we could just about model it on the currently best-performing computer in the world. One human brain on the best-performing and most expensive supercomputer - that does not seem very efficient.

This is another problem. An approximate estimate can be made. The performance of modern computers is measured in units called FLOPS (floating-point operations per second). Roughly speaking, this is the number of operations with decimal numbers a computer can perform per second. The best-performing supercomputers have a computing performance in the tens of petaflops, or something crazy like that. In other words, these supercomputers can calculate simulations of the atmosphere and of nuclear explosions, with complex equations with a billion parameters. A human brain doing such calculations by hand would manage an estimated speed of only about 0.001 FLOPS. But the human brain can do other things today's computers would not be able to simulate, as they are too complicated.
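A back-of-the-envelope comparison of the two figures quoted above (taking 10 petaFLOPS as a low estimate for "tens of petaflops"):

```python
supercomputer_flops = 10e15      # 10 petaFLOPS
human_arithmetic_flops = 0.001   # roughly one hand calculation every ~17 minutes

print(f"speed ratio: {supercomputer_flops / human_arithmetic_flops:.0e}")  # ~1e+19
```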

This is not the focal point; it is centred more on consciousness or decision-making. These are things we do not understand properly. We do not know how consciousness works, but we do know that it does not work the way ML does. Nobody looks at 500,000 photos of cats to be able to recognise a cat on the street. That is why we cannot speak about true AI yet; there are many challenges, and here we are only talking about simulating a single average human brain.

We have just touched on it: we do not know how consciousness works. Nobody knows what consciousness is. There are philosophical definitions, but we lack a mathematical model we would be able to use. We have no clear definition.

This is another question in this debate. If we want to achieve full-fledged AI, it has to have consciousness. Without it, it will be a mere list of rules which the machine follows, but its decision-making will not be independent. For AI to be independent, it has to have its own ability to think, just like a human. And, of course, we humans make mistakes as well, so if AI is designed by us, it will probably not have perfect thinking and will probably make mistakes, just like us. And if it did not make them, we would already be talking about super intelligence.

If we talk about creating AI like human intelligence, then it should have all these requirements, like a personality, and should make mistakes and learn lessons from them.

Exactly. Each form of artificial intelligence develops in a certain way. One has grown up in one laboratory, another in a different one. This development is distinctive, and different AIs learn from different inputs and offer different kinds of evaluation, just like humans. Otherwise, we would be talking about super intelligence, which always gives perfect outputs.

I do not think so. There is a group of people who dream about it, but we also know of quite a big group that does not agree with it. This group includes many respected people, like the late Stephen Hawking, Elon Musk, and Bill Gates, who think we should not take this path.

I see it pragmatically. Why should humans do monotonous, boring things if a computer can do them better? I would rather have my MRI evaluated by a really good computer with a high success rate than by an average or below-average radiologist.

Thus, the key question is which AI we are discussing. Do we just want intelligent help from a computer, artificial general intelligence, or super intelligence? From my point of view, the main goal now is to create AI that helps us with problems we cannot solve, or solves them much more effectively. Beyond that, we have not made any further progress.

Yes, it is possible. The question is rather whether we want it at all. As I already indicated, I see it from a practical point of view. Elements of AI can help us greatly in everyday life, and that is exactly what we here at ESET are working on. Why not use it?

Juraj Jánošík received a Bachelor's degree in applied informatics and a Master's degree in robotics at the Slovak University of Technology in Bratislava. In 2008, he joined ESET as an analyst of malicious code. Since 2013, he has been leading the team responsible for the automatic detection of threats and artificial intelligence. He is currently responsible for integrating machine learning into the detection engine. He regularly lectures at specialist conferences around the world.
