When Artificial Intelligence Gets Too Clever by Half

Posted: May 26, 2017

Picture a crew of engineers building a dam. There's an anthill in the way, but the engineers don't care or even notice; they flood the area anyway, and too bad for the ants.

Now replace the ants with humans, happily going about their own business, and the engineers with a race of superintelligent computers that happen to have other priorities. Just as we now have power to dictate the fate of less intelligent beings, so might such computers someday exert life-and-death power over us.

That's the analogy the superstar physicist Stephen Hawking used in 2015 to describe the mounting perils he sees in the current explosion of artificial intelligence. And lately the alarms have been sounding louder than ever. Allan Dafoe of Yale and Stuart Russell of Berkeley wrote an essay in MIT Technology Review titled "Yes, We Are Worried About the Existential Risk of Artificial Intelligence." The computing giants Bill Gates and Elon Musk have issued similar warnings online.

Should we be worried?

Perhaps the most influential case that we should be was made by the Oxford philosopher Nick Bostrom, whose 2014 book, Superintelligence: Paths, Dangers, Strategies, was a New York Times best seller. The book catapulted the term superintelligence into popular consciousness and bestowed authority on an idea many had viewed as science fiction.

Bostrom defined superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," with the hypothetical power to vastly outmaneuver us, just like Hawking's engineers.

And it could have very good reasons for doing so. In the title of his eighth chapter, Bostrom asks, "Is the default outcome doom?" and he suggests that the unnerving answer might be yes. He points to a number of goals that superintelligent machines might adopt, including resource acquisition, self-preservation, and cognitive improvement, with potentially disastrous consequences for us and the planet.

Bostrom illustrates his point with a colorful thought experiment. Suppose we develop an AI tasked with building as many paper clips as possible. This "paper clip maximizer" might simply convert everything, humanity included, into paper clips. Ousting humans would also facilitate self-preservation, eliminating our unfortunate knack for switching off machines. There's also the possibility of an "intelligence explosion," in which even a modestly capable general AI might undergo a rapid period of self-improvement in order to better achieve its goals, swiftly bypassing humanity in the process.

Many critics are skeptical of this line of argument, seeing a fundamental disconnect between the kinds of AI that might result in an intelligence explosion and the state of the field today. Contemporary AI, they note, is effective only at specific tasks, like driving and winning at Jeopardy!

Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, writes that many researchers place superintelligence "beyond the foreseeable horizon," and the philosopher Luciano Floridi argues in Aeon that we should not lose sleep over the possible appearance of some ultraintelligence, since we have no idea how we might even begin to engineer it. The roboticist Rodney Brooks sums up these critiques well, likening fears over superintelligence today to seeing more efficient internal combustion engines appear and jumping to the conclusion that warp drives are just around the corner.

To these and other critics, superintelligence is not just a waste of time but, in Floridi's words, "irresponsibly distracting," diverting attention from more pressing problems. One such problem is inequality: AI software used to assess the risk of recidivism, for example, has shown clear racial bias, being twice as likely to incorrectly flag black individuals. Women searching Google are less likely than men to be shown ads for high-paying jobs. Add to this a host of emerging issues, including driverless cars, autonomous weapons, and the automation of jobs, and it is clear there are many areas needing immediate attention.

To the Microsoft researcher Kate Crawford, the hand-wringing over superintelligence is symptomatic of AI's "white guy problem," an endemic lack of diversity in the field. Writing in The New York Times, she opines that while the rise of an artificially intelligent apex predator may be the biggest risk for the affluent white men who dominate public discourse on AI, for those who already face marginalization or bias, the threats are here.

But these arguments, however valid, do not go to the heart of what Bostrom and like-minded thinkers are worried about. Critics who emphasize the low probability of an intelligence explosion neglect a core component of Bostrom's thesis. In the preface of Superintelligence, he writes that "it is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur." Instead, his argument hinges on the logical possibility of an intelligence explosion (something few deny) and on the need to consider the problem in advance, given the consequences.

That superintelligence might distract us from addressing existing problems is a legitimate concern, but aside from an (admittedly successful) appeal to intuition, no evidence is actually offered in support of this claim.

It's more likely that Bostrom and company have had the opposite effect, with the problems of contemporary AI benefiting from increased political, media, and public attention, as well as an accompanying injection of funds into the field. A case in point is the new Leverhulme Center for the Future of Intelligence. Based at the University of Cambridge, the center was founded with $13 million secured largely through the work of its sister organization, the Center for the Study of Existential Risk, known for its work on the risks of advanced AI.

This is not an either/or debate, nor do we need to neglect existing problems in order to pay attention to the risks of superintelligence. It is important not to allow concern for short-term exigencies to overwhelm concern for the future (and vice versa), something at which humanity has a very poor track record. There is room to consider both the long-term and the short-term consequences of AI, and, given the enormous opportunities and risks, it is imperative that we do so.

Robert Hart is a researcher and writer on the politics of science and technology, with special interests in biotechnology, animal behavior, and artificial intelligence. He can be reached on Twitter @Rob_Hart17.
