How long have we got before humans are replaced by artificial intelligence? – Scroll.in

My view, and that of the majority of my colleagues in AI, is that it'll be at least half a century before we see computers matching humans. Given that various breakthroughs are needed, and it's very hard to predict when breakthroughs will happen, it might even be a century or more. If that's the case, you don't need to lose too much sleep tonight.

One reason for believing that machines will get to human-level or even superhuman-level intelligence quickly is the dangerously seductive idea of the technological singularity. This idea can be traced back to a number of people over fifty years ago: John von Neumann, one of the fathers of computing, and the mathematician and Bletchley Park cryptographer IJ Good. More recently, it's an idea that has been popularised by the science-fiction author Vernor Vinge and the futurist Ray Kurzweil.

The singularity is the anticipated point in humankind's history when we have developed a machine so intelligent that it can recursively redesign itself to be even more intelligent. The idea is that this would be a tipping point, and machine intelligence would suddenly start to improve exponentially, quickly exceeding human intelligence by orders of magnitude.
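To see why the claimed dynamic sounds so dramatic, here is a minimal toy sketch (an illustration of my own, with entirely made-up numbers; it assumes intelligence can be collapsed into a single score and that each self-redesign multiplies that score by a fixed factor once a tipping point is crossed):

```python
# Toy model of the singularity argument (illustrative only; all numbers invented).
# Below the tipping point, progress depends on slow human effort; above it, each
# generation redesigns itself and the gain compounds, giving exponential growth.

def recursive_self_improvement(start: float, tipping_point: float,
                               gain: float, generations: int) -> list[float]:
    """Return the intelligence score after each redesign generation."""
    scores = [start]
    for _ in range(generations):
        current = scores[-1]
        if current >= tipping_point:
            # Past the tipping point: the machine improves itself, compounding the gain.
            scores.append(current * gain)
        else:
            # Before the tipping point: slow, additive, human-driven progress.
            scores.append(current + 0.1)
    return scores

if __name__ == "__main__":
    trajectory = recursive_self_improvement(start=0.5, tipping_point=1.0,
                                            gain=1.5, generations=20)
    for generation, score in enumerate(trajectory):
        print(f"generation {generation:2d}: intelligence score {score:8.2f}")
```

Under those assumptions the score explodes within a handful of generations. The arguments that follow question exactly those assumptions.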

Once we reach the technological singularity, we will no longer be the most intelligent species on the planet. It will certainly be an interesting moment in our history. One fear is that it will happen so quickly that we won't have time to monitor and control the development of this super-intelligence, and that this super-intelligence might lead, intentionally or unintentionally, to the end of the human race.

Proponents of the technological singularity (who, tellingly, are usually not AI researchers but futurists or philosophers) behave as if the singularity is inevitable. To them, it is a logical certainty; the only question mark is when. However, like many other AI researchers, I have considerable doubt about its inevitability.

We have learned, over half a century of work, how difficult it is to build computer systems with even modest intelligence. And we have never built a single computer system that can recursively self-improve. Indeed, even the most intelligent system we know of on the planet, the human brain, has made only modest improvements in its cognitive abilities. It is, for example, still as painfully slow today for most of us to learn a second language as it always was. Little of our understanding of the human brain has made the task easier.

Since 1930, there has been a significant and gradual increase in intelligence test scores in many parts of the world. This is called the Flynn effect, after the New Zealand researcher James Flynn, who has done much to identify the phenomenon. However, explanations for this have tended to focus on improvements in nutrition, healthcare and access to school, rather than on how we educate our young people.

There are multiple technical reasons why the technological singularity might never happen. I discussed many of these in my last book. Nevertheless, the meme that the singularity is inevitable doesn't seem to be getting any less popular. Given the importance of the topic (it may decide the fate of the human race), I will return to these arguments in greater detail, and in light of recent developments in the debate. I will also introduce some new arguments against the inevitability of the technological singularity.

My first objection to the supposed inevitability of the singularity is an idea that has been called the faster-thinking dog argument. It considers the consequences of being able to think faster. While clock speeds may have plateaued, computers nonetheless still process data faster and faster. They achieve this by exploiting more and more parallelism, doing multiple tasks at the same time, a little like the brain.

There's an expectation that, by being able to think longer and harder about problems, machines will eventually become smarter than us. And we certainly have benefited from ever-increasing computer power; the smartphone in your pocket is evidence of that. But processing speed alone probably won't get us to the singularity.

Suppose that you could increase the speed of the brain of your dog. Such a faster-thinking dog would still not be able to talk to you, play chess or compose a sonnet. For one thing, it doesn't possess complex language. A faster-thinking dog will likely still be a dog. It will still dream of chasing squirrels and sticks. It may think these thoughts more quickly, but they will likely not be much deeper. Similarly, faster computers alone will not yield higher intelligence.

Intelligence is a product of many things. It takes us years of experience to train our intuitions. And during those years of learning we also refine our ability to abstract: to take ideas from old situations and apply them to novel ones. We add to our common-sense knowledge, which helps us adapt to new circumstances. Our intelligence is thus much more than thinking faster about a problem.

My second argument against the inevitability of the technological singularity is anthropocentricity. Proponents of the singularity place a special importance on human intelligence. Surpassing human intelligence, they argue, is a tipping point. Computers will then be able to recursively redesign and improve themselves. But why is human intelligence such a special point to pass?

Human intelligence cannot be measured on some single, linear scale. And even if it could be, human intelligence would not be a single point, but a spectrum of different intelligences. In a room full of people, some people are smarter than others. So what metric of human intelligence are computers supposed to pass? That of the smartest person in the room? The smartest person on the planet today? The smartest person who ever lived? The smartest person who might ever live in the future? The idea of passing human intelligence is already starting to sound a bit shaky.

But let's put these objections aside for a second. Why is human intelligence, whatever it is, the tipping point to pass, after which machine intelligence will inevitably snowball? The assumption appears to be that if we are smart enough to build a machine smarter than us, then this smarter machine must also be smart enough to build an even smarter machine. And so on. But there is no logical reason that this would be the case. We might be able to build a machine smarter than ourselves. But that smarter machine might not necessarily be able to improve on itself.
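A small variation on the earlier toy sketch makes the point (again, the numbers are invented purely for illustration): if each successive redesign is harder than the last, so that the multiplicative gain shrinks towards one, the same recursion plateaus instead of exploding.

```python
# Variant of the toy model: each generation's gain decays towards 1.0, so the
# recursive process converges to a ceiling rather than compounding without limit.
# All figures are made up; this is a sketch of one possible dynamic, not a prediction.

def diminishing_self_improvement(start: float, gain: float,
                                 decay: float, generations: int) -> list[float]:
    """Intelligence scores when each redesign yields a smaller gain than the last."""
    scores = [start]
    step_gain = gain
    for _ in range(generations):
        scores.append(scores[-1] * step_gain)
        # Each successive redesign is harder: the gain shrinks towards 1.0.
        step_gain = 1.0 + (step_gain - 1.0) * decay
    return scores

if __name__ == "__main__":
    trajectory = diminishing_self_improvement(start=1.0, gain=1.5,
                                              decay=0.5, generations=20)
    print(f"final score after 20 generations: {trajectory[-1]:.3f}")
```

Recursion alone, in other words, does not guarantee a runaway; that depends entirely on whether each generation of machine finds further improvement easier or harder than the last.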

There could be some level of intelligence that is a tipping point. But it could be any level of intelligence. It seems unlikely that the tipping point is less than human intelligence. If it were less than human intelligence, we humans could likely simulate such a machine today, use this simulation to build a smarter machine, and thereby already start the process of recursive self-improvement.

So it seems that any tipping point is at, or above, the level of human intelligence. Indeed, it could be well above human intelligence. But if we need to build machines with much greater intelligence than our own, this throws up the possibility that we might not be smart enough to build such machines.

My third argument against the inevitability of the technological singularity concerns meta-intelligence. Intelligence, as I said before, encompasses many different abilities. It includes the ability both to perceive the world and to reason about that perceived world. But it also includes many other abilities, such as creativity.

The argument for the inevitability of the singularity confuses two different abilities. It conflates the ability to do a task and the ability to improve your ability to do a task. We can build intelligent machines that improve their ability to do particular tasks, and do these tasks better than humans. Baidu, for instance, has built Deep Speech 2, a machine-learning algorithm that learned to transcribe Mandarin better than humans.

But Deep Speech 2 has not improved its own ability to learn tasks. It takes Deep Speech 2 just as long now to learn to transcribe Mandarin as it always has. Its superhuman ability to transcribe Mandarin hasn't fed back into improvements of the basic deep-learning algorithm itself. Unlike humans, who get to be better learners as they learn new tasks, Deep Speech 2 doesn't learn faster as it learns more.
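The distinction can be sketched in a few lines of code (an illustration of my own; the numbers and the toy "learners" are invented and bear no relation to how Deep Speech 2 actually works): one learner pays the same training cost for every new task, while a hypothetical self-improving learner pays less and less as it accumulates experience. Today's systems behave like the first.

```python
# Task ability vs. meta-ability (illustrative sketch; all costs are invented).
# Both learners master each task, but only the second gets faster at learning
# as it accumulates tasks.

def train_fixed_learner(tasks: int, epochs_per_task: int = 100) -> list[int]:
    """Like today's systems: every new task costs the same amount of training."""
    return [epochs_per_task for _ in range(tasks)]

def train_meta_learner(tasks: int, epochs_per_task: int = 100) -> list[int]:
    """Hypothetical learner whose experience transfers: later tasks cost less."""
    costs = []
    for task_index in range(tasks):
        # Each previously learned task shaves some effort off the next one.
        costs.append(max(10, epochs_per_task - 15 * task_index))
    return costs

if __name__ == "__main__":
    print("fixed learner:", train_fixed_learner(5))   # [100, 100, 100, 100, 100]
    print("meta learner :", train_meta_learner(5))    # [100, 85, 70, 55, 40]
```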

Improvements to deep-learning algorithms have come about the old-fashioned way: by humans thinking long and hard about the problem. We have not yet built any self-improving machines. It's not certain that we ever will.

Excerpted with permission from 2062: The World That AI Made, Toby Walsh, Speaking Tiger Books.
