Throughout 2017, I have been running polls on the public's appetite for risk in the pursuit of superintelligence. I've been running these on SurveyMonkey, paying for audiences so as to minimize distortions in the data. I've spent nearly $10,000 on this project, and I did it in about the most scientific way I could: it is not a passed-around survey, but paid polling across the entire American spectrum.
All in all, America can perhaps be best characterized as excited about the prospect of a superintelligence explosion, but also deeply afraid, skeptical, and adamantly opposed to the idea that we should plow forth without any regulation or plan. This is, it seems to me, exactly what is happening right now.
You can view the entire dataset here. I welcome any comments. I'm not a statistician, don't have a research assistant, and have a full-time job, so my ability to proofread and double-check things is limited (though I have tried). If you have comments, you can tweet at me @rickwebb.
This is not an essay debating the likely outcome of humanity's pursuit of superintelligence, nor one trying to convince you that it's going to turn out one way or another. This is an article about democracy, risk, and the appetite for it.
Furthermore, this is not an essay about weak artificial intelligence: your Alexa, or Siri, or the algorithms that guide you when using Waze. Artificial intelligence comes in three flavors:
Virtually all of the public policy discussions, news, and polling have centered around the first type of AI: weak AI. This is the kind that will power the robots that take your jobs. The Obama administration's report on artificial intelligence, for example, dedicated only perhaps three paragraphs across its 45 pages to SAI. The report was part of a larger push by the Obama administration, which also hosted several events; the primary focus there, too, was on weak AI. What little polling has been done on AI has concerned primarily weak AI.
But it is superintelligence that arguably poses the much larger risks for mankind. And we are further along than most people realize.
Let me ask you a question: if you were in the ballot booth, and you saw the following question on a ballot, how would you answer?
The situation is this: in the next 100 years or so, there's a chance (no one is sure how good a chance) that humanity will develop machines that achieve, and then surpass, human levels of intelligence. When we do, most experts agree, there are two potential paths for humanity:
There's a lot of hyperbole and terminology around the debate about pursuing human-level artificial intelligence. It can be confusing. To get up to speed, I strongly recommend you read the two-part primer on the AI dilemma by the wonderful blog Wait But Why (part 1, part 2). Please consider taking a moment to read some of the articles linked above (or bookmark them for later). However you feel about the topic, it's probably worth it as a citizen to get up to speed on both sides of the debate, since arguably it will affect us all (or our children).
Now, if you've read all that, I suspect you have one of two responses, much like those outlined in the article. You'll read all the good stuff, get really into it, and think "that sounds great! I think that will happen!"
Or you'll read all the bad stuff and think "that sounds terrible, and plausible! I don't want that to happen!"
And guess what! Good for you, because whichever side you've taken, there is some super genius out there agreeing with you.
I've discussed these articles with lots of people. Here's what I've found: by and large, enthusiasm for AI depends on an individual's belief in the worst-case scenario. We humans have a strange belief that we can predict the future: if we personally predict a positive future, we assume that's the one that's going to happen, and if we predict a negative future, we assume that'll happen.
But if we stop and take a moment, we realize that this is hogwash. We know, intellectually, that we can't predict the future, and that we could be wrong.
So let's take a moment and acknowledge what's really going on in this scenario: experts pretty much see two potential new paths for humanity when it comes to AI: good and bad.
And the reality is there is some probability that each one of them may come true.
It might be 100% likely that only the good could ever happen. It might be 100% likely that only the bad could ever happen. In reality, the odds are probably something other than 100-0 or 0-100. They might be, for example, 50-50. We don't really know.
(There is, of course, the likelihood that neither will happen, in which case, cool: humanity goes on as it was, and this article becomes moot. So we are ignoring that possibility for now.)
Furthermore, because of the confusion around weak AI, human-level AI, strong AI/superintelligence, and what have you, I decided to boil the central debate down for the public to its core: hey, there's a tech out there, it might make us immortal, but it might kill us. What do you think? This is, after all, the core dilemma. The nut. The part of the problem that most calls for the public's input.
So, in the end, we're right back to where we started from:
Now, in the question above, I'm making up the one-in-five probability numbers. It might be one in 100. It might be one in two. We just don't know. NO ONE KNOWS. Remember this. Many, many people will try to convince you that they know. All they are doing is arguing their viewpoint. They don't really know. No one can predict the future. Again, remember this.
We are not arguing over whether or not this will happen in this essay. We are accepting the consensus of experts that it could happen. And we urge consideration of the fact that the actual likelihood it will happen is currently unknown.
This is also not the forum to discuss how we could ever even know the likelihood of a future event. Forecasting the future is, of course, an inexact science; we'll never really know, for sure, the likelihood of a future event. There are numerous forecasting methodologies out there that scientists and decision-makers use, and I offer no opinion on them here. With regard to superintelligence, the Wait But Why essay does a good job going over some of the methods we've used in the past, such as polling scientists at conferences.
I've been aware of the potential of this issue for decades. But like you, I thought it was far off. Not my generation's problem. AI research, like many areas of research my sci-fi inner child loved, had been stalled for the last 30-50 years. We had little progress in space exploration, self-driving cars, solar power, virtual reality, electric cars, flying cars, etc. Like these other areas, AI research seemed on pause. I suspect that was partially because of the brain drain caused by building the Internet, and partially because some problems proved more difficult than expected.
Yet, much like each of these fields, AI research has exploded in the last five to ten years. The field is back, and back with a vengeance.
Up to now, AI policy has been defined almost exclusively by AI researchers, policy wonks, and tech company executives. Even our own government has been, by and large, absent from the conversation. I asked one friend knowledgeable about the executive branch's handle on the situation, and he said, in effect, that they're not unaware, but they have more pressing matters.
A massive amount of AI research is being done, and most of humanity has no idea how far along we are on the journey. To be fair, the researchers involved often have good reasons for not shouting their work from the rooftops: they don't want to cause unnecessary alarm, and they worry about a clampdown on their ability to publish what they do publish. The fact remains that the public is, by and large, being left in the dark.
I believe that when facing a decision that affects the entirety of humanity at a fundamental level (not just life or death, but the very notion of existence), we all should be involved in the decision.
This is, admittedly, a democratic view. Many people believe in democracy only in a limited manner. They fret over the will of the masses, direct democracy, and decisions made in the heat of the moment. This is all valid; reasonable people can debate these nuances, and I do not seek to hash them all out here. I'm not saying we need a worldwide vote.
I am saying, however, that all of humanity should have a say in the pursuit of breakthroughs that put its very existence at risk. The will of the people should be our guide, and the better informed they are, the better decisions they will make.
There is a distinction between votes and polling. Polling guides policy, and voting, in its ideal form, affects behavior. A congresswoman may be in office because, say, 22% of all non-felon adults in her district put her there. She may then govern by listening to the will of the people as a whole through polls. Something similar should be applied here.
If this were classical economics, and humans were what John Stuart Mill dubbed homo economicus (perfectly rational beings with all the relevant knowledge at hand), humanity could simply calculate the risk's potential and likelihood, measure that against the likelihood of the potential benefits, and come up with a decision. Reality is more complex. First, the potential downside and upside are both, essentially, infinite in economic terms, throwing the equation out of whack. And second, of course, we do not actually know the likelihood that SAI will lead to humanity's destruction. It's a safe guess that that number exists, but we don't know it.
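As a toy illustration (mine, not the author's, and with entirely hypothetical numbers), here is the homo economicus calculation sketched in Python, and why it breaks down once the stakes become effectively infinite:

```python
def expected_value(p_good, payoff_good, payoff_bad):
    """Classical expected value: weight each outcome by its probability."""
    return p_good * payoff_good + (1 - p_good) * payoff_bad

# With finite stakes, the probability estimate drives the decision:
print(expected_value(0.75, 100, -100))  # 50.0 -> looks worth pursuing
print(expected_value(0.25, 100, -100))  # -50.0 -> looks not worth it

# But with effectively unbounded stakes (immortality vs. extinction),
# any nonzero probability on either side swamps the calculation:
inf = float("inf")
print(expected_value(0.999, inf, -inf))  # nan: the arithmetic gives no answer
```

Even a 99.9% optimistic estimate produces `inf - inf`, which is undefined: the naive calculation simply has nothing to say when both payoffs are infinite, which is the author's first objection.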
Luckily, our very faults (that we are not homo economicus) also lead to our strength in this situation: we can deal with fuzzy numbers and the notion of infinity. Our brains contain multitudes, to borrow from Walt Whitman.
What, then, is the level of acceptable risk at which humanity would, at least by consensus, accept the pursuit of superintelligence?
It came as a shock to me, then, that the population at large hasn't really been polled about its views on the potential of a superintelligence apocalypse. There are several polls about artificial intelligence (this one by the British Science Association is a good example), but not so many about the existential risk potentially inherent in pursuing superintelligence. Those that exist are generally in the same mold as this one by 60 Minutes, inquiring about its audience's favorite AI movies and where one would hide from the robot insurrection. It also helpfully asks whether one should fear the robots killing us more than ourselves. One could argue that this is a leading question, and in any case it's hardly useful for the development of public policy. Searching Google for "superintelligence polling" yields little other than polling of experts, and searching for "superintelligence public opinion" yields virtually nothing.
On the academic front, a December 2016 paper by Stanford's Ethan Fast and Microsoft's Eric Horvitz does a superb job surveying the landscape, relying primarily on press mentions and press tone, while acknowledging that the polling is light and not specifically focused on superintelligence. Nonetheless, it is a fascinating read.
All in all, though, data around the existential risk mankind may face with the onset of superintelligence, and Americans views on it, is sparse indeed.
So I set out to do it myself.
You can view my entire dataset here.
First, I set out to ask some top-level questions about superintelligence research. Now, I confess, I am not a pollster. I know these questions are somewhat leading. I did my best to keep them neutral, but I've got my own biases. Nonetheless, it seemed worthwhile to just go ahead and ask a bunch of Americans what they think about the risks and potential of superintelligence.
We asked 400 individuals four top-level questions regarding superintelligence research:
At a top level, Americans seem to find the prospect of superintelligence and its benefits exciting, though it is not a ringing endorsement. Some 47% of Americans characterized themselves as excited on some level.
Again, I caution that this data is limited. Furthermore, I am not a statistics expert, so I can't say exactly what the margin of error becomes when you poll a broad sample and then analyze subsets of it (by income, for example), but I suspect those subgroup estimates are less precise than the topline numbers.
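For a rough sense of the subgroup problem, here is a standard back-of-the-envelope margin-of-error sketch (my illustration; the 400-person topline is from the text, while the 80-person subgroup size is a hypothetical example, and the formula assumes simple random sampling, which paid online panels only approximate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion.
    p=0.5 is the worst case; z=1.96 is the 95% confidence multiplier."""
    return z * math.sqrt(p * (1 - p) / n)

# Topline poll of 400 respondents:
print(round(margin_of_error(400) * 100, 1))  # ~4.9 percentage points

# A hypothetical subgroup of 80 of those respondents:
print(round(margin_of_error(80) * 100, 1))   # ~11.0 percentage points
```

Because the margin scales with 1/sqrt(n), slicing a 400-person sample into income brackets roughly doubles the uncertainty on each slice, which is why subgroup numbers should be read more cautiously than the topline.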
It would be awesome if someone started polling about this regularly. This is just one snapshot; repeated polls are more accurate over time.
And it would be amazing if people started polling in other countries. When I originally planned this research, I wanted to poll across countries, but SurveyMonkey didn't have such functionality. Since I started in January, they've begun offering some international polling. I hope someone gets on that; I am tapped out.
It would be great if people ran these polls with larger samples and better margins of error, especially the poll of black Americans. Other subgroups, too: SurveyMonkey doesn't offer much when it comes to Asian Americans, Hispanics, and other minority groups.
So. What does all this mean? After all, it's not as if God will come down from on high and say, "Hey Americans! Right now you have an 80% likelihood of not dying if you give this superintelligence thing a go!" We will never really know the likelihood. But what this does tell us is that Americans are relatively risk-averse in this regard (though the math is a bit wonky when we are dealing with infinite risk and infinite reward). This is not surprising: modern behavioral economics research has shown that humans value what they have over what they might gain in the future.
We also see from the dataset that Americans are more skeptical of institutions pursuing superintelligence research on their own. I suspect that if Americans knew the true extent of what's being done on this front, these trust numbers would decline further, but that's just a hunch. In any case, this data could be useful to institutions debating how and when to disclose their superintelligence research to the public; there may be some ticking time bombs surrounding the goodwill line item on some of these companies' balance sheets.
America can perhaps be best characterized as excited about the prospect of a superintelligence explosion, but also deeply afraid, skeptical, and adamantly opposed to the idea that we should plow forth without any regulation or plan. This is, it seems to me, exactly what is happening right now.
Whatever your interpretation, it's my hope that this can help spawn some efforts by policymakers, researchers, corporations, and academic institutions to gauge the will of the people regarding the research they are supporting or undertaking. I conclude with a quote from Robert Oppenheimer, one of the inventors of the atomic bomb: "When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb."
I pulled the Oppenheimer quote from a recent New Yorker article about CRISPR DNA editing and the scientist Kevin Esvelt's efforts to bring that research into the open. "We really need to think about the world we are entering," he says. And elsewhere: "To an appalling degree, not that much has changed. Scientists still really don't care very much about what others think of their work."
I'll save my personal interpretation of the data for another essay; I've tried to keep editorializing to a minimum here. This is not to say that I haven't formed opinions while looking at this data. I hope you do too.
Superintelligence and Public Opinion - NewCo Shift