Artificial intelligence has a long way to go before computers are as intelligent as humans. But progress is happening rapidly, in everything from logical reasoning to facial and speech recognition. With steady improvements in memory, processing power, and programming, the question isn't whether a computer will ever be as smart as a human, but only how long it will take. And once computers are as smart as people, they'll keep getting smarter, and in short order become much, much smarter than people. When artificial intelligence (AI) becomes artificial superintelligence (ASI), the real problems begin.
In his new book Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat argues that we need to begin thinking now about how artificial intelligences will treat their creators when they can think faster, reason better, and understand more than any human. These questions were long the province of thrilling (if not always realistic) science fiction, but Barrat warns that the consequences could indeed be catastrophic. I spoke with him about his book, the dangers of ASI, and whether we're all doomed.
Your basic thesis is that even if we don't know exactly how long it will take, eventually artificial intelligence will surpass human intelligence, and once they're smarter than we are, we are in serious trouble. This is an idea people are familiar with; there are lots of sci-fi stories about homicidal AIs like HAL or Skynet. But you argue that it may be more likely that super-intelligent AI will be simply indifferent to the fate of humanity, and that could be just as dangerous for us. Can you explain?
First, I think we've been inoculated against the threat of advanced AI by science fiction. We've had so much fun with Hollywood tropes like the Terminator and, of course, the HAL 9000 that we don't take the threat seriously. But as Bill Joy once said, "Just because you saw it in a movie doesn't mean it can't happen."
Superintelligence in no way implies benevolence. Your laptop doesn't like or dislike you any more than your toaster does, so why do we believe an intelligent machine will be different? We humans have a bad habit of imputing motive to objects and phenomena, of anthropomorphizing. If it's thundering outside, the gods must be angry. We see friendly faces in clouds. We anticipate that because we create an artifact, like an intelligent machine, it will be grateful for its existence and want to serve and protect us.
But these are our qualities, not machines'. Furthermore, at an advanced level, as I write in Our Final Invention, citing the work of AI maker and theorist Steve Omohundro, artificial intelligence will have drives much like our own, including self-protection and resource acquisition. It will want to achieve its goals and marshal sufficient resources to do so. It will want to avoid being turned off. When its goals collide with ours, it will have no basis for valuing our goals, and it will use whatever means are at its disposal to achieve its own.
The immediate answer many people would give to the threat is, "Well, just program them not to hurt us," with some kind of updated version of Isaac Asimov's Three Laws of Robotics. I'm guessing that's no easy task.
That's right, it's extremely difficult. Asimov's Three Laws are often cited as a cure-all for controlling ASI. In fact they were created to generate tension and stories; his classic I, Robot is a catalogue of unintended consequences caused by conflicts among the three laws. Not only are our values hard to give to a machine, but our values change from culture to culture, religion to religion, and over time. We can't agree on when life begins, so how can we reach a consensus about the qualities of life we want to protect? And will those values make sense in 100 years?
When you're discussing our efforts to contain an AI many times smarter than us, you make an analogy to waking up in a prison run by mice (with whom you can communicate). My takeaway from that was pretty depressing. Of course you'd be able to manipulate the mice into letting you go free, and it would probably be just as easy for an artificial superintelligence to get us to do what it wants. Does that mean any kind of technological means of containing it will inevitably fail?
Our Final Invention is both a warning and a call for ideas about how to govern superintelligence. I think we'll struggle mortally with this problem, and there aren't a lot of solutions out there; I've been looking. Ray Kurzweil, whose portrait of the future is very rosy, concedes that superior intelligence won't be contained. His solution is to merge with it. The 1975 Asilomar Conference on Recombinant DNA is a good model of what should happen. Researchers suspended work and got together to establish basic safety protocols, like "don't track the DNA out on your shoes." It worked, and now we're benefiting from gene therapy and better crops, with no horrendous accidents so far. MIRI (the Machine Intelligence Research Institute) advocates creating the first superintelligence with friendliness encoded, among other steps, but that's hard to do. Bottom line: before we share the planet with superintelligent machines, we need a science for understanding and controlling them.
But as you point out, it would be extremely difficult in practical terms to ban a particular kind of AI: if we don't build it, someone else will, and there will always be what seem to them like very good reasons to do so. With people all over the world working on these technologies, how can we impose any kind of stricture that will prevent the outcomes we're afraid of?
Human-level intelligence at the price of a computer will be the most lucrative commodity in the history of the world. Imagine banks of thousands of PhD-quality brains working on cancer research, climate modeling, and weapons development. With those enticements, how do you get competing researchers and countries to the table to discuss safety? My answer is to write a book, make films, get people aware and involved, and start a public-private partnership targeted at safety. Government and industry have to get together. For that to happen, we must give people the resources they need to understand a problem that's going to deeply affect their lives. Public pressure is all we've got to get people to the table. If we wait to be motivated by horrendous accidents and weaponization, as we have with nuclear fission, then we'll have waited too long.
Beyond the threat of annihilation, one of the most disturbing parts of this vision is the idea that we'll eventually reach the point at which humans are no longer the most important actors on planet Earth. There's another species (if you will) with more capability and power to make the big decisions, and we're here at their indulgence, even if for the moment they're treating us humanely. If we're a secondary species, how do you think that will affect how we think about what it means to be human?
That's right: we humans steer the future not because we're the fastest or strongest creatures, but because we're the smartest. When we share the planet with creatures smarter than we are, they'll steer the future. For an analogy, look at how we treat intelligent animals: they're at SeaWorld, they're bushmeat, they're in zoos, or they're endangered. Of course the Singularitarians believe that the superintelligence will be ours; we'll be transhuman. I'm deeply skeptical of that one-sided good-news story.
As you were writing this book, were there times you thought, "That's it. We're doomed. Nothing can be done"?
Yes, and I thought it was curious to be alive and aware within the time window in which we might be able to change that future, a twist on the anthropic principle. But having hope in the face of seemingly hopeless odds is a moral choice. Perhaps we'll get wise to the dangers in time. Perhaps we'll learn after a survivable accident. Perhaps enough people will realize that advanced AI is a dual-use technology, like nuclear fission. The world was introduced to fission at Hiroshima. Then we as a species spent the next 50 years with a gun pointed at our own heads. We can't survive that abrupt an introduction to superintelligence. And we need a better maintenance plan than fission's mutually assured destruction.