Superintelligence by Nick Bostrom and A Rough Ride to the Future by James Lovelock

Posted: September 6, 2016 at 8:20 am

Roboy, a humanoid robot developed by the University of Zurich's Artificial Intelligence Lab. Photograph: Erik Tham/Corbis

The Culture novels of Iain M Banks describe a future in which Minds (superintelligent machines dwelling in giant spacecraft) are largely benevolent towards human beings and seem to take pleasure from our creativity and occasional unpredictability. It's a vision that I find appealing compared with many other imagined worlds. I'd like to think that if superintelligent beings did exist they would be at least as enlightened as, say, the theologian Thomas Berry, who wrote that once we begin to celebrate the joys of the Earth, all things become possible. But the smart money (or rather, most of the money) points another way. Box-office success goes to tales in which intelligences created by humans rise up and destroy or enslave their makers.

If you think this is all science fiction and fantasy, you may be wrong. Scientists including Stephen Hawking and Max Tegmark believe that superintelligent machines are quite feasible. And the consequences of creating them, they say, could be either the best or the worst thing ever to happen to humanity. Suppose, then, we take the proposition seriously. When could it happen, and what could the consequences be? Both Nick Bostrom and James Lovelock address these questions.

The authors are very different. Bostrom is a 41-year-old academic philosopher; Lovelock, now 94, is a theorist and a prolific inventor (his electron capture detector was key to the discovery of the stratospheric ozone hole). They are alike in that neither is afraid to develop and champion heterodox ideas. Lovelock is famous for the Gaia hypothesis, which holds that life on Earth, taken as a whole, creates conditions that favour its own long-term flourishing. Bostrom has advanced radical ideas on transhumanism and even argued that it is more than likely we live inside a computer-generated virtual world.

As early as the 1940s Alan Turing, John von Neumann and others saw that machines could one day have almost unlimited impact on humanity and the rest of life. Turing suggested that programs mimicking evolutionary processes could result in machines with intelligence comparable to or greater than that of humans. Certainly, achievements in computer science over the last 75 years have been astonishing. Most obviously, machines can now execute complex mathematical operations many orders of magnitude faster than humans. They can perform a range of tasks, from playing world-beating chess to flying a plane or driving a car, and their capabilities are rapidly growing. The consequences, from machines stealing your job to eliminating drudgery, from unravelling the enigmas of cancer to remote killing, are and will continue to be striking.

But even the most sophisticated machines created so far are intelligent in only a limited sense. They enact capabilities that humans have envisaged and programmed into them. Creativity, the ability to generate new knowledge, and generalised intelligence outside specific domains seem to be beyond them. Expectations that AI would soon overtake human intelligence were first dashed in the 1960s. And the notion of a singularity (the idea, advanced first by Vernor Vinge and championed most conspicuously by Ray Kurzweil, that a sudden, rapid explosion of AI and human biological enhancement is imminent and will probably be with us by around 2030) looks to be heading for a similar fate.

Still, one would be ill-advised to dismiss the possibility altogether. (It took 100 years after George Cayley first understood the basic principles of aerodynamics to achieve heavier-than-air flight, and the first aeroplanes looked nothing like birds.) Bostrom reports that many leading researchers in AI place a 90% probability on the development of human-level machine intelligence between 2075 and 2090. It is likely, he says, that superintelligence, vastly outstripping ours, would follow. The central argument of his book goes like this: the first superintelligence to be created will have a decisive first-mover advantage and, in a world where there is no other system remotely comparable, it will be very powerful. Such a system will shape the world according to its "preferences", and will probably be able to overcome any resistance that humans can put up. The bad news is that the preferences such an artificial agent could have will, if fully realised, involve the complete destruction of human life and most plausible human values. The default outcome, then, is catastrophe. In addition, Bostrom argues that we are not out of the woods even if his initial premise is false and a unipolar superintelligence never appears. "Before the prospect of an intelligence explosion," he writes, "we humans are like small children playing with a bomb."

It will, he says, be very difficult but perhaps not impossible to engineer a superintelligence with preferences that make it friendly to humans or able to be controlled. Our saving grace could involve "indirect normativity" and "coherent extrapolated volition", in which we take advantage of an artificial system's own intelligence to deliver beneficial outcomes that we ourselves cannot see or agree on in advance. The challenge we face, he stresses, is "to hold on to our humanity: to maintain our groundedness". He recommends research be guided and managed within a strict ethical framework. After all, we are likely to need the smartest technology we can get our hands on to deal with the challenges we face in the nearer term. It comes, then, to a balance of risks. Bostrom's Oxford University colleagues Anders Sandberg and Andrew Snyder-Beattie suggest that nuclear war and the weaponisation of biotechnology and nanotechnology present greater threats to humanity than superintelligence.

For them, manmade climate change is not an existential threat. This judgment is shared by Lovelock, who argues that while climate change could mean a bumpy ride over the next century or two, with billions dead, it is not necessarily the end of the world.

What distinguishes Lovelock's new book from his earlier ones is an emphasis on the possibility of humanity as part of the solution as well as part of the problem. "We are crucially important for the survival of life on Earth," he writes. "If we trash civilisation by heedless auto-intoxication, global war or the wasteful dispersal of the Earth's chemical resources, it will grow progressively more difficult to begin again and reach the present level of knowledge. If we fail, or become extinct, there is probably not sufficient time for a successor animal to evolve intelligence at or above our level." Earth now needs humans equipped with the best of modern science, he believes, to ensure that life will continue to thrive. Only we can produce new forms clever enough to flourish millions of years in the future when the sun gets hotter and larger and begins to make carbon-based life less viable. Lovelock thinks superintelligent machines are a distant prospect, and that technology will remain our slave.

What to believe and to predict? Perhaps it is better to quote. In his 1973 television series and book The Ascent of Man, Jacob Bronowski said: "We are nature's unique experiment to make the rational intelligence prove itself sounder than reflex. Knowledge is our destiny." To this add a few words of Sandberg's: "The core problem is overconfidence. The greatest threat is human stupidity."

