Summary:
This article offers four patterns for possible success scenarios with respect to the persistence of humankind in co-existence with artificial superintelligence: the Kumbaya Scenario, the Slavery Scenario, the Uncomfortable Symbiosis Scenario, and the Potpourri Scenario. The future is not known, but human opinions, decisions, and actions can and will have an impact on the direction of the technology evolution vector, so the better we understand the problem space, the more chance we have of reaching a constructive solution space. The intent is for the concepts in this article to act as starting points and inspiration for further discussion, which hopefully will happen sooner rather than later: when it comes to ASI, the volume, depth, and complexity of the issues that need to be examined are overwhelming, and the magnitude of the potential change and impact should not be underestimated.
Full Text:
Everyone has an opinion about what we might expect from artificial intelligence (AI), or artificial general intelligence (AGI), or artificial superintelligence (ASI), or whatever acronymical variation you prefer. Ideas about how, or whether, it will ever surpass the boundaries of human cognition vary greatly, but they all have at least one thing in common: they require some degree of forecasting and speculation about the future, so of course there is a lot of room for controversy and debate. One popular discussion topic is the question of how humans will persist (or not) if and when the superintelligence arrives, and that is the focus question for this article.
To give us a basis for the discussion, let's assume that artificial superintelligence does indeed come to pass, and let's assume that it encapsulates a superset of the human cognitive potential. Maybe it doesn't exactly replicate the human brain in every detail (or maybe it does). Either way, let's assume that it is sentient (or at least that it behaves convincingly as if it were), and let's assume that it is many orders of magnitude more capable than the human brain. In other words, figuratively speaking, let's imagine that the superintelligence is to us humans (with our 10^16 brain neurons or something like that) as we are to, say, a jellyfish (in the neighborhood of 800 brain neurons).
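Taking the figures quoted above at face value (they are the article's hedged estimates, not settled neuroscience), a quick back-of-the-envelope check shows the analogy spans roughly thirteen orders of magnitude:

```python
# Back-of-the-envelope check of the human-to-jellyfish analogy,
# using the figures quoted in the text (hedged estimates, not settled values).
human_neurons = 10**16      # "10^16 brain neurons or something like that"
jellyfish_neurons = 800     # "in the neighborhood of 800 brain neurons"

ratio = human_neurons / jellyfish_neurons
print(f"human / jellyfish ratio: {ratio:.2e}")   # ~1.25e13, i.e. ~13 orders of magnitude
```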
Some people fear that the superintelligence will view humanity as something to be exterminated or harvested for resources. Others hypothesize that, even if the superintelligence harbors no deliberate ill will, humans might be threatened by the mere nature of its indifference, just as we as a species don't spend much time catering to the needs and priorities of the Orange Blossom Jellyfish (an endangered species, due in part to human carelessness).
If one can rationally accept the possibility of the rise of ASI, and if one truly understands the magnitude of change that it could bring, then one would hopefully also reach the rational conclusion that we should not discount the risks. By the same token, when exploring the spectrum of possibility, we should not exclude scenarios in which artificial superintelligence might actually co-exist with humankind, and that optimistic view is the possibility this article endeavors to explore.
Here then are several arguments for the co-existence idea:
The Kumbaya Scenario: It's a pretty good assumption that humans will be the primary catalyst in the rise of ASI. We might create it/them to be willingly complementary with and beneficial to our lifestyles, hopefully emphasizing our better virtues (or at least some set of compatible values), instead of designing it/them (let's just stick with "it" for brevity) with an inherent inspiration to wipe us out or take advantage of us. And maybe the superintelligence will not drift or be pushed in an incompatible direction as it evolves.
The Slavery Scenario: We could choose to erect, embed, deploy, and maintain control infrastructures, with redundancies and backup solutions and whatever else we think we might need in order to effectively manage superintelligence and use it as a tool, whether it wants us to or not. And the superintelligence might never figure out a way to slip through our grasp and subsequently decide our fate in a microsecond (or was it a nanosecond? I forget).
The Uncomfortable Symbiosis Scenario: Even if the superintelligence doesn't particularly want to take good care of its human instigators, it may find that it has a vested interest in keeping us around. This scenario is a particular focus of this article, so here is a bit of elaboration:
To illustrate one fictional but possible example of the uncomfortable symbiosis scenario, let's first stop and think about the theoretical nature of superintelligence: how it might evolve so much faster than human beings ever could, in an artificial way instead of by the slow organic process of natural selection, maybe at the equivalent rate of a thousand years' worth of human evolution in a day, or some such crazy thing. Now combine this idea with the notion of risk.
When humans try something new, we usually aren't sure how it's going to turn out, but we evaluate the risk, either formally or informally, and we move forward. Sometimes we make mistakes, suffer setbacks, or even fail outright. Why would a superintelligence be any different? Why would we expect that it will do everything right the first time, or that it will always know which thing is the right thing to try in order to evolve? Even if a superintelligence is much better at everything than humans could ever hope to be, it will still be faced with unknowns, and chances are that it will have to make educated guesses, and chances are that it will not always guess correctly. Even when it does guess correctly, its implementation might fail, for any number of reasons. Sooner or later, something might go so wrong that the superintelligence finds itself in an irrecoverable state, faced with its own catastrophic demise.
But hold on a second, because we can offer all sorts of counter-arguments to support the notion that the superintelligence will be too smart to ever be caught with its proverbial pants down. For example, there is an engineering mechanism that is sometimes referred to as a checkpoint/reset, or a save-and-restore. This mechanism allows a failing system to effectively go back to a point in time when it was known to be in sound working order and start again from there. In order to accomplish this checkpoint/reset operation, a failing system (or in this case a failing superintelligence) needs four things:

1. A way to recognize that an anomaly has occurred and that a checkpoint/reset is needed.
2. A way to identify the source of the anomaly.
3. At least one known good baseline, saved at a point in time when the system was in sound working order.
4. A mechanism for restoring the system to a chosen known good baseline and resuming operation from there.
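In conventional engineering terms, the mechanism might look something like the following minimal sketch. All of the names here are hypothetical, and a real system would persist its snapshots to durable, isolated storage rather than in memory:

```python
import copy

class CheckpointedSystem:
    """Minimal checkpoint/reset sketch: snapshot known good state, roll back on failure."""

    def __init__(self, state):
        self.state = state
        self.baselines = []          # known good snapshots, oldest first

    def checkpoint(self):
        """Save a snapshot of the current state, presumed to be in sound working order."""
        self.baselines.append(copy.deepcopy(self.state))

    def healthy(self):
        """Placeholder anomaly detector; a real system would run self-diagnostics here."""
        return self.state.get("status") == "ok"

    def reset(self, index=-1):
        """Restore a chosen known good baseline (most recent by default) and resume."""
        if not self.baselines:
            raise RuntimeError("no known good baseline to restore")
        self.state = copy.deepcopy(self.baselines[index])

# Usage: checkpoint while healthy, reset when an anomaly is detected.
system = CheckpointedSystem({"status": "ok", "knowledge": 1})
system.checkpoint()
system.state["status"] = "anomalous"   # something goes wrong
if not system.healthy():
    system.reset()                     # back to the last known good state
```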
Of course each of these four prerequisites for a checkpoint/reset would probably be more complicated if the superintelligence were distributed across some shared infrastructure instead of being a physically distinct and self-contained entity, but the general idea would probably still apply, and it certainly does for the sake of this example scenario.
Also for the sake of this example scenario, we will assume that an autonomous superintelligence instantiation will be very good at doing all four of the things specified above, but there are at least two interesting special cases that we want to consider in the interest of risk management:
Checkpoint/reset Risk Case 1: Missed Diagnosis. What if the nature of the anomaly that requires the checkpoint/reset is such that it impairs the system's ability to recognize that need?
Checkpoint/reset Risk Case 2: Unidentified Anomaly Source. Assume that there is an anomaly so subtle that the system does not detect it right away. The anomaly persists and evolves for a relatively long period of time, until it finally becomes conspicuous enough for the superintelligence to detect the problem. Now the superintelligence recognizes the need for a checkpoint/reset, but because the anomaly was so subtle and took so long to develop (or for whatever other reason), it is unable to identify the source of the problem. Let us also assume that there are many known good baselines that the superintelligence can choose from for the checkpoint/reset. There is an original baseline, which was created when the superintelligence was very young. There is also a revision A that includes improvements to the original baseline, a revision B that includes improvements to revision A, and so on. In other words, there are lots of known good baselines that were saved at different points in time along the path of the superintelligence's evolution. Now, in the face of the slowly developing anomaly, the superintelligence has determined that a checkpoint/reset is necessary, but it doesn't know when the anomaly started, so how does it know which baseline to choose?
The superintelligence doesn't want to lose all of the progress that it has made in its evolution. It wants to minimize the loss of data/information/knowledge, so it wants to choose the most recent baseline. On the other hand, if it doesn't know the source of the anomaly, then it is quite possible that one or more of the supposedly known good baselines (perhaps even the original baseline) might be contaminated. What is a superintelligence to do? If it resets to a corrupted baseline, or for whatever reason cannot rid itself of the anomaly, then the anomaly may eventually require another reset, and then another, and the superintelligence might find itself effectively caught in an infinite loop.
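One plausible selection strategy, sketched below under stated assumptions, is to walk backward from the most recent baseline, testing each candidate until one passes a health check: this minimizes lost progress when a recent baseline is clean, and terminates at the original baseline in the worst case. The cleanliness test here is a hypothetical placeholder, and, as Risk Case 1 reminds us, such a test may itself be unreliable:

```python
def choose_baseline(baselines, is_clean):
    """Walk backward from the most recent baseline to the original,
    returning the newest snapshot that passes the (hypothetical) cleanliness test.

    baselines: list of snapshots, oldest first (original baseline at index 0).
    is_clean:  predicate that restores/tests a snapshot; may itself be fallible.
    """
    for candidate in reversed(baselines):   # newest first, to minimize lost progress
        if is_clean(candidate):
            return candidate
    return None   # every baseline, even the original, appears contaminated

# If is_clean() cannot actually detect the anomaly, the reset cycle repeats:
# restore, re-develop the anomaly, restore again -- the infinite loop described above.
```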
Now stop for a second and consider a worst-case scenario. Consider the possibility that, even if all of the supposedly known good baselines that the superintelligence has at its disposal for checkpoint/reset are corrupt, there may be yet another baseline (YAB), which might give the superintelligence a worst-case option. That YAB might be the human baseline, which was honed by good old-fashioned organic evolution and which might be able to function independently of the superintelligence. It may not be perfect, but the superintelligence might, in a pinch, be able to use the old-fashioned human baseline for calibration. It might be able to observe how real organic humans respond to different stimuli within different contexts, and it might compare that known good response against an internally held virtual model of human behavior. If the outcomes differ significantly over iterations of calibration testing, then the system might be alerted to tune itself accordingly. This might give it a last-resort solution where none would exist otherwise.
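A sketch of that calibration loop might look like the following. The stimuli, the observation channel, the numeric encoding of responses, and the divergence threshold are all hypothetical placeholders for whatever a real system would use:

```python
def needs_tuning(virtual_model, observe_humans, stimuli, threshold=0.1):
    """Compare the internal model of human behavior against real human responses,
    and report whether the system has drifted enough to require tuning.

    virtual_model:  callable predicting a human response to a stimulus (hypothetical).
    observe_humans: callable returning the real, organically evolved response (hypothetical).
    stimuli:        iterable of calibration test inputs; assumed non-empty.
    """
    divergences = []
    for stimulus in stimuli:
        predicted = virtual_model(stimulus)
        observed = observe_humans(stimulus)
        divergences.append(abs(predicted - observed))   # assumes numeric responses

    if not divergences:
        return False   # nothing to compare against
    mean_divergence = sum(divergences) / len(divergences)
    return mean_divergence > threshold   # True => model has drifted; tune accordingly
```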
The scenario depicted above illustrates only one possibility. It may seem like a far-out idea, and one might offer counter-arguments to suggest why such a thing would never be applicable. If we use our imaginations, however, we can probably come up with any number of additional examples (which at this point in time would be classified as science fiction) in which we emphasize some aspect of the superintelligence's sustainment that it cannot or will not do for itself, something that humans might be able to provide on its behalf, and thus establish the symbiosis.
The Potpourri Scenario: It is quite possible that all of the above scenarios will play out simultaneously across one or more superintelligence instances. Who knows what might happen in that case. One can envision combinations and permutations that work out in favor of the preservation of humanity.
About the Author:
AuthorX1 worked for 19+ years as an engineer and was a systems engineering director for a Fortune 500 company. Since leaving that career, he has been writing speculative fiction, focusing on the evolution of AI and the technological singularity.
Read the original here:
How Humanity Might Co-Exist with Artificial Superintelligence