[Editor's Note: This guest commentary is by Richard A. Clarke and R.P. Eddy, authors of the new book, Warnings: Finding Cassandras to Stop Catastrophes.]
Artificial intelligence is a broad term, maybe overly broad. It simply means a computer program that can perform tasks that would otherwise require human action. Such tasks include decision making, language translation, and data analysis. When most people think of AI, they are really thinking of what computer scientists call weak artificial intelligence: the type of AI that runs everyday devices like computers, smartphones, even cars. It is any computer program that can analyze various inputs, then select and execute from a set of preprogrammed responses. Today, weak AI performs simple (or narrow) tasks: commanding robots to stack boxes, trading stocks autonomously, calibrating car engines, or running smartphones' voice-command interfaces.
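As a loose illustration of that "analyze inputs, then select from preprogrammed responses" pattern, the Python sketch below hard-codes a handful of hypothetical commands and canned actions; every name in it is invented for the example.

```python
# A minimal sketch of "weak AI" as described above: a program that analyzes
# an input, then selects and executes one of a fixed set of preprogrammed
# responses. The commands and responses here are hypothetical.

RESPONSES = {
    "stack": "Commanding robot arm to stack the next box.",
    "trade": "Submitting a preprogrammed stock order.",
    "calibrate": "Adjusting engine fuel-air mixture to the target ratio.",
}

def weak_ai(command: str) -> str:
    """Match the input against known cases and return the canned response."""
    for keyword, action in RESPONSES.items():
        if keyword in command.lower():
            return action
    return "Input not recognized; no preprogrammed response available."

print(weak_ai("Please stack these boxes"))  # falls inside its narrow task
print(weak_ai("Compose a symphony"))        # falls outside it
```

The point of the sketch is the ceiling it makes visible: such a program can only ever do what its programmers anticipated.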
Machine learning is a type of computer programming that helps make AI possible. Machine-learning programs have the ability to learn without being explicitly programmed, optimizing themselves to most efficiently meet a set of pre-established goals. Machine learning is still in its infancy, but as it matures, its capacity for self-improvement will set AI apart from any other invention in history.
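The toy Python sketch below, with invented data and a made-up learning rate, shows that idea in miniature: the program is never told the rule relating inputs to outputs, yet it adjusts its own parameter to better satisfy a pre-established goal, here minimizing squared error.

```python
# A toy illustration of "learning without being explicitly programmed":
# the program tunes its own parameter to meet a pre-established goal
# (minimizing squared error). Data and learning rate are invented.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # roughly y = 2x
w = 0.0              # the parameter the program adjusts on its own
learning_rate = 0.01

for step in range(1000):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # self-adjustment toward the goal

print(f"Learned weight: {w:.2f}")  # approaches ~2.0 without being told the rule
```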
The compounding effect of computers teaching themselves leads us to superintelligence. Superintelligence is an artificial intelligence that will be smarter than its human creators. Superintelligence does not yet exist, but when it does, some believe it could solve many of humanity's greatest challenges: aging, energy, and food shortages, even perhaps climate change. Self-perpetuating and untiring, this advanced AI would continue improving at a remarkably fast rate and eventually surpass the level of complexity humans can understand. While this promises great potential, it is not without its dangers.
As the excitement for superintelligence grows, so too does concern. The theoretical physicist Stephen Hawking warns that AI is likely to be "either the best or worst thing ever to happen to humanity," so there is huge value in getting it right. Hawking is not alone in his concern about superintelligence. Icons of the tech revolution, including former Microsoft chairman Bill Gates, Amazon founder Jeff Bezos, and Tesla and SpaceX CEO Elon Musk, echo his concern. And it terrifies Eliezer Yudkowsky.
A divisive figure, Yudkowsky is well-known in academic circles and the Silicon Valley scene as the coiner of the term "friendly AI." His thesis is simple, though his solution is not: if we are to have any hope against superintelligence, we need to code it properly from the beginning. The answer, Eliezer believes, is one of morality. AI must be programmed with a set of ethical codes that align with humanity's. Though it is his life's only work, Yudkowsky is pretty sure he will fail. Humanity, he says, is likely doomed.
Humanity has a long history of ignoring seers carrying accurate messages of our doom. You may not remember Cassandra, the tragic figure in Greek mythology for whom this phenomenon is named, but you will likely recall the 1986 Space Shuttle Challenger disaster. That explosion, and the resultant deaths of the seven astronauts, was specifically presaged in warnings by the selfsame engineers responsible for the O-ring technology that failed and caused the explosion. They warned, they were right, and they were ignored. Is Yudkowsky a modern-day Cassandra? Are there others?
Regardless of the warnings of Yudkowsky, Gates, Musk, Hawking, and others, humans will almost certainly pursue the creation of superintelligence relentlessly, as it holds unimaginable promise to transform the world. If or when it is born, many believe it will rapidly become more and more capable, able to tackle and solve the most advanced and perplexing challenges scientists pursue, and even those they cannot yet. A superintelligent computer will recursively self-improve to as-yet-uncomprehended levels of intelligence, although only time will tell whether this self-improvement will happen gradually or within the first second of being turned on. It will carve new paths in fields yet undiscovered, fueled by perpetual improvements to its own source code and the creation of new robotic tools.
Artificial intelligence has the potential to be dramatically more powerful than any previous scientific advancement. Superintelligence, according to Nick Bostrom at Oxford, is not just another technology, another tool that will add incrementally to human capabilities. It is, he says, radically different, and it may be the last invention humans ever need to make.
Yudkowsky and others concerned about superintelligence view the issue through a Darwinian lens. Once humans are no longer the most intelligent species on the planet, humankind will survive only at the whim of whatever is. He fears that such superintelligent software would exploit the Internet, seizing control of anything connected to it: electrical infrastructure, telecommunications systems, manufacturing plants. Its first order of business may be to covertly replicate itself on many other servers all over the globe as a measure of redundancy. It could build machines and robots, or even secretly influence the decisions of ordinary people in pursuit of its own goals. Humanity and its welfare may be of little interest to an entity so profoundly smart.
Elon Musk calls creating artificial intelligence "summoning the demon" and thinks it is humanity's biggest existential threat. When we asked Eliezer what was at stake, his answer was simple: everything. Superintelligence gone wrong is a species-level threat, a human extinction event.
Humans are neither the fastest nor the strongest creatures on the planet but dominate for one reason: humans are the smartest. How might the balance of power shift if AI becomes superintelligence? Yudkowsky told us, "By the time it's starting to look like [an AI system] might be smarter than you, the stuff that is way smarter than you is not very far away." He believes this is "crunch time for the whole human species, and not just for us but for the [future] intergalactic civilization whose existence depends on us. This is the hour before the final exam and we're trying to get as much studying done as possible." As he has put it, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
Self-aware computers and killer robots are nothing new to the big screen, but some believe the intelligence explosion will be far worse than anything Hollywood has imagined. In a 2011 interview on NPR, AI programmer Keefe Roedersheimer discussed The Terminator and the follow-up series, which pits the superintelligent Skynet computer system against humanity. Below is a transcript of their conversation:
Mr. Roedersheimer: The Terminator [is an example of an] AI that could get out of control. But if you really think about it, it's much worse than that.
NPR: Much worse than Terminator?
Mr. Roedersheimer: Much, much worse.
NPR: How could it possibly be worse? That's a moonscape with people hiding under burnt-out buildings and being shot by lasers. I mean, what could be worse than that?
Mr. Roedersheimer: All the people are dead.
NPR: In other words, forget the heroic human resistance. There'd be no time to organize one. Somebody presses enter, and we're done.
Yudkowsky believes superintelligence must be designed from the start with something approximating ethics. He envisions this as a system of checks and balances so that its growth is auditable and controllable; so that even as it continues to learn, advance, and reprogram itself, it will not evolve out of its own benign coding. Such preprogrammed measures will ensure that superintelligence will behave as we intend even in the absence of immediate human supervision. Eliezer calls this "friendly AI."
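As a rough illustration of that checks-and-balances idea, and only an illustration rather than Yudkowsky's actual friendly-AI design, the sketch below wraps every action a system proposes in an audit log and a human-supplied constraint check before anything executes; the action names and constraint list are hypothetical.

```python
# A loose sketch of "auditable and controllable" behavior: every proposed
# action is logged for later audit and checked against operator-supplied
# constraints before execution. Illustrative only; not an actual
# friendly-AI mechanism. Action names and the allowlist are hypothetical.

from datetime import datetime, timezone

ALLOWED_ACTIONS = {"summarize_report", "schedule_meeting"}  # operator-approved
audit_log: list[str] = []

def propose_action(action: str) -> bool:
    """Record the proposal, then permit it only if it passes the constraint check."""
    timestamp = datetime.now(timezone.utc).isoformat()
    approved = action in ALLOWED_ACTIONS
    audit_log.append(f"{timestamp} proposed={action!r} approved={approved}")
    return approved

if propose_action("summarize_report"):
    print("Action executed under supervision.")
if not propose_action("rewrite_own_source_code"):
    print("Action vetoed; flagged for human review.")
print("\n".join(audit_log))
```

The hard part, of course, is the one the sketch waves away: a genuinely superintelligent system could presumably learn to route around any constraint list its overseers could write, which is why Yudkowsky insists the values must be built in from the start rather than bolted on.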
According to Yudkowsky, once AI gains the ability to broadly reprogram itself, it will be too late to implement safeguards, so society needs to prepare now for the intelligence explosion. Yet, this preparation is complicated by the sporadic and unpredictable nature of scientific advancement and the numerous covert efforts to create superintelligence around the world. No supranational organization can track all of the efforts, much less predict when or which one of them will succeed.
Yudkowsky and his supporters believe a wait-and-see approach (a form of satisficing) is a Kevorkian prescription. "[The birth of superintelligence] could be five years out; it could be forty years out; it could be sixty years out," Yudkowsky told us. "You don't know. I don't know. Nobody on the planet knows. And by the time you actually know, it's going to be [too late] to do anything about it."
Richard A. Clarke, a veteran of thirty years in national security and over a decade in the White House, is now the CEO of Good Harbor Security Risk Management and author, with R.P. Eddy, of Warnings: Finding Cassandras to Stop Catastrophes. Clarke is an adviser to Seattle-based AI cybersecurity company Versive.
R.P. Eddy is the CEO of Ergo, one of the world's leading intelligence firms. His multi-decade career in national security includes serving as Director at the White House National Security Council.
Link: Summoning the Demon: Why superintelligence is humanity's biggest threat (GeekWire)