Summoning the Demon: Why superintelligence is humanity’s …

Posted: May 26, 2017 at 4:20 am

[Editor's Note: This guest commentary is by Richard A. Clarke and R.P. Eddy, authors of the new book, Warnings: Finding Cassandras to Stop Catastrophes.]

Artificial intelligence is a broad term, maybe overly broad. It simply means a computer program that can perform tasks that would otherwise require human action. Such tasks include decision making, language translation, and data analysis. When most people think of AI, they are really thinking of what computer scientists call weak artificial intelligence: the type of AI that runs everyday devices like computers, smartphones, even cars. It is any computer program that can analyze various inputs, then select and execute from a set of preprogrammed responses. Today, weak AI performs simple (or narrow) tasks: commanding robots to stack boxes, trading stocks autonomously, calibrating car engines, or running smartphones' voice-command interfaces.

Machine learning is a type of computer programming that helps make AI possible. Machine-learning programs have the ability to learn without being explicitly programmed, optimizing themselves to most efficiently meet a set of pre-established goals. Machine learning is still in its infancy, but as it matures, its capacity for self-improvement sets AI apart from any other invention in history.
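To make that definition concrete, here is a minimal, illustrative sketch in Python. It is our own toy example, not drawn from the book: the data, the hidden rule y = 3x + 2, and the update loop are all invented for illustration. The program is never given the rule; it is given only examples and a pre-established goal (shrink its prediction error), and it adjusts its own parameters until it approximates the rule.

# Toy illustration of "learning without being explicitly programmed."
# Goal (pre-established): predict y from x. Hidden relationship: y = 3x + 2.

examples = [(x, 3 * x + 2) for x in range(10)]  # training examples

w, b = 0.0, 0.0          # the program's initial, uninformed guess
learning_rate = 0.01

for _ in range(2000):                   # repeated self-adjustment
    for x, y in examples:
        prediction = w * x + b
        error = prediction - y          # how far off the current guess is
        w -= learning_rate * error * x  # nudge parameters to reduce the error
        b -= learning_rate * error

print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # approaches y = 3.00x + 2.00

The point is the same one the paragraph makes: the rule ends up in the program because the program adjusted itself toward a goal, not because a programmer typed it in.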

The compounding effect of computers teaching themselves leads us to superintelligence. Superintelligence is an artificial intelligence that will be smarter than its human creators. Superintelligence does not yet exist, but when it does, some believe it could solve many of humanity's greatest challenges: aging, energy and food shortages, perhaps even climate change. Self-perpetuating and untiring, this advanced AI would continue improving at a remarkably fast rate and eventually surpass the level of complexity humans can understand. While this promises great potential, it is not without its dangers.

As the excitement for superintelligence grows, so too does concern. The astrophysicist Dr. Stephen Hawking warns that AI is "likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right." Hawking is not alone in his concern about superintelligence. Icons of the tech revolution, including former Microsoft chairman Bill Gates, Amazon founder Jeff Bezos, and Tesla and SpaceX CEO Elon Musk, echo his concern. And it terrifies Eliezer Yudkowsky.

A divisive figure, Yudkowsky is well known in academic circles and the Silicon Valley scene as the coiner of the term "friendly AI." His thesis is simple, though his solution is not: if we are to have any hope against superintelligence, we need to code it properly from the beginning. The answer, Eliezer believes, is one of morality. AI must be programmed with a set of ethical codes that align with humanity's. Though it is his life's only work, Yudkowsky is pretty sure he will fail. Humanity, he says, is likely doomed.

Humanity has a long history of ignoring seers carrying accurate messages of our doom. You may not remember Cassandra, the tragic figure in Greek mythology for whom this phenomenon is named, but you will likely recall the 1986 Space Shuttle Challenger disaster. That explosion, and the resultant deaths of the seven astronauts, was specifically presaged in warnings by the selfsame scientists responsible for the O-ring technology that failed and caused the explosion. They warned, they were right, and they were ignored. Is Yudkowsky a modern-day Cassandra? Are there others?

Regardless of the warnings of Yudkowsky, Gates, Musk, Hawking, and others, humans will almost certainly pursue the creation of superintelligence relentlessly, as it holds unimaginable promise to transform the world. If or when it is born, many believe it will rapidly become more and more capable, able to tackle and solve the most advanced and perplexing challenges scientists pursue, and even those that they can't yet. A superintelligent computer will recursively self-improve to as-yet-uncomprehended levels of intelligence, although only time will tell whether this self-improvement will happen gradually or within the first second of being turned on. It will carve new paths in fields yet undiscovered, fueled by perpetual self-improvements to its own source code and the creation of new robotic tools.

Artificial intelligence has the potential to be dramatically more powerful than any previous scientific advancement. Superintelligence, according to Nick Bostrom at Oxford, is "not just another technology, another tool that will add incrementally to human capabilities." It is, he says, radically different, and it may be the last invention humans ever need to make.

Yudkowsky and others concerned about superintelligence view the issue through a Darwinian lens. Once humans are no longer the most intelligent species on the planet, humankind will survive only at the whim of whatever is. He fears that such superintelligent software would exploit the Internet, seizing control of anything connected to it: electrical infrastructure, telecommunications systems, manufacturing plants. Its first order of business may be to covertly replicate itself on many other servers all over the globe as a measure of redundancy. It could build machines and robots, or even secretly influence the decisions of ordinary people in pursuit of its own goals. Humanity and its welfare may be of little interest to an entity so profoundly smart.

Elon Musk calls creating artificial intelligence "summoning the demon" and thinks it's humanity's "biggest existential threat." When we asked Eliezer what was at stake, his answer was simple: everything. Superintelligence gone wrong is a species-level threat, a human extinction event.

Humans are neither the fastest nor the strongest creatures on the planet but dominate for one reason: humans are the smartest. How might the balance of power shift if AI becomes superintelligent? Yudkowsky told us, "By the time it's starting to look like [an AI system] might be smarter than you, the stuff that is way smarter than you is not very far away." He believes this is "crunch time for the whole human species, and not just for us but for the [future] intergalactic civilization whose existence depends on us. This is the hour before the final exam and we're trying to get as much studying done as possible." As he has put it, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

Self-aware computers and killer robots are nothing new to the big screen, but some believe the intelligence explosion will be far worse than anything Hollywood has imagined. In a 2011 interview on NPR, AI programmer Keefe Roedersheimer discussed The Terminator and the follow-up series, which pits the superintelligent Skynet computer system against humanity. Below is a transcript of their conversation:

Mr. Roedersheimer: The Terminator [is an example of an] AI that could get out of control. But if you really think about it, it's much worse than that.

NPR: Much worse than Terminator?

Mr. Roedersheimer: Much, much worse.

NPR: How could it possibly ... that's a moonscape with people hiding under burnt-out buildings and being shot by lasers. I mean, what could be worse than that?

Mr. Roedersheimer: All the people are dead.

NPR: In other words, forget the heroic human resistance. There'd be no time to organize one. Somebody presses enter, and we're done.

Yudkowsky believes superintelligence must be designed from the start with something approximating ethics. He envisions this as a system of checks and balances so that its growth is auditable and controllable, so that even as it continues to learn, advance, and reprogram itself, it will not evolve out of its own benign coding. Such preprogrammed measures would ensure that superintelligence behaves as we intend even in the absence of immediate human supervision. Eliezer calls this "friendly AI."

According to Yudkowsky, once AI gains the ability to broadly reprogram itself, it will be too late to implement safeguards, so society needs to prepare now for the intelligence explosion. Yet, this preparation is complicated by the sporadic and unpredictable nature of scientific advancement and the numerous covert efforts to create superintelligence around the world. No supranational organization can track all of the efforts, much less predict when or which one of them will succeed.

Eliezer and his supporters believe a wait-and-see approach (a form of satisficing) is a Kevorkian prescription. "[The birth of superintelligence] could be five years out; it could be forty years out; it could be sixty years out," Yudkowsky told us. "You don't know. I don't know. Nobody on the planet knows. And by the time you actually know, it's going to be [too late] to do anything about it."

Richard A. Clarke, a veteran of thirty years in national security and over a decade in the White House, is now the CEO of Good Harbor Security Risk Management and author, with R.P. Eddy, of Warnings: Finding Cassandras to Stop Catastrophes. Clarke is an adviser to Seattle-based AI cybersecurity company Versive.

R.P. Eddy is the CEO of Ergo, one of the world's leading intelligence firms. His multi-decade career in national security includes serving as Director at the White House National Security Council.
