Superintelligence and Public Opinion

Posted: April 27, 2017 at 2:24 am

Throughout 2017, I have been running polls on the public's appetite for risk regarding the pursuit of superintelligence. I've been running these on SurveyMonkey, paying for audiences so as to minimize distortions in the data. I've spent nearly $10,000 on this project, and I did it in about the most scientific way I could: this is not a passed-around survey, but paid polling across the entire American spectrum.

All in all, America can perhaps be best characterized as excited about the prospect of a superintelligence explosion, but also deeply afraid, skeptical, and adamantly opposed to the idea that we should plow forth without any regulation or plan. This is, it seems to me, exactly what is happening right now.

You can view the entire dataset here. I welcome any comments. I'm not a statistician, don't have a research assistant, and have a full-time job, so my ability to proofread and double-check things is limited (though I have tried). If you have comments, you can tweet at me @rickwebb.

This is not an essay debating the likely outcome of humanity's pursuit of superintelligence. This is not an essay trying to convince you that it's going to turn out one way or another. This is an article about democracy, risk, and the appetite for it.

Furthermore, this is not an essay about weak artificial intelligence: your Alexa, or Siri, or the algorithms that guide you when using Waze. Artificial intelligence comes in three flavors: weak AI, the narrow, single-task intelligence behind products like those; human-level AI, a machine that matches human intelligence across the board; and superintelligence (strong AI, or SAI), a machine that surpasses human intelligence entirely.

Virtually all of the public policy discussions, news, and polling have centered around the first type of AI: weak AI. This is the kind that will make the robots that will take your jobs. The Obama administration's report on artificial intelligence, for example, dedicated perhaps three paragraphs across its 45 pages to SAI. The report was part of a larger push by the Obama administration, which also hosted several events; the primary focus there, too, was on weak AI. What little polling has been done on AI has focused primarily on weak AI.

But it is superintelligence that arguably poses the much larger risks for mankind. And we are further along than most people realize.

Let me ask you a question: if you were in the voting booth, and you saw the following question on a ballot, how would you answer?

The situation is this: in the next 100 years or so, there's a chance (no one is sure how good a chance) that humanity will develop machines that achieve, and then surpass, human levels of intelligence. When we do, most experts agree, there are two potential paths for humanity: a good one, in which superintelligence solves problems we cannot, perhaps even disease and death itself; and a bad one, in which it slips beyond our control and destroys us.

There's a lot of hyperbole and terminology around the debate about pursuing human-level artificial intelligence, and it can be confusing. To get up to speed, I strongly recommend you read the two-part primer on the AI dilemma by the wonderful blog Wait But Why (part 1, part 2). Please consider taking a moment to read some of the articles linked above (or bookmark them for later). However you feel about the topic, it's probably worth it as a citizen to get up to speed on both sides of the debate, since arguably it will affect us all (or our children).

Now, if you've read all that, I suspect you have one of two responses, much like those outlined in the articles. You'll read all the good stuff, get really into it, and think: "That sounds great! I think that will happen!"

Or you'll read all the bad stuff and think: "That sounds terrible, and plausible! I don't want that to happen!"

And guess what! Good for you, because whichever side you've taken, there is some super genius out there agreeing with you.

I've discussed these articles with lots of people. Here's what I've found: by and large, enthusiasm for AI depends on an individual's belief in the worst-case scenario. We, as humans, have a strange belief that we can predict the future, and if we, personally, predict a positive future, we assume that's the one that's going to happen. And if we predict a negative future, we assume that'll happen.

But if we stop and take a moment, we realize that this is hogwash. We know, intellectually, we can't predict the future, and we could be wrong.

So let's take a moment and acknowledge what's really going on in this scenario: experts pretty much see two potential new paths for humanity when it comes to AI, good and bad.

And the reality is there is some probability that each one of them may come true.

It might be 100% likely that only the good could ever happen. It might be 100% likely only the bad could ever happen. In reality, the odds are probably something other than 100/0 or 0/100. The odds might be, for example, 50/50. We don't really know.

(There is, of course, the likelihood that neither will happen, in which case, cool. Humanity goes on as it was, and this article becomes moot. So we are ignoring that for now).

Furthermore, because of the confusion around weak AI, human-level AI, strong AI/superintelligence, and what have you, I decided to boil the central debate down to its core for the public: hey, there's a tech out there; it might make us immortal, but it might kill us. What do you think? This is, after all, the core dilemma. The nut. The part of the problem that most calls for the public's input.

So, in the end, we're right back to the question we started with.

Now, in that question, I'm making up the one-in-five probability number. It might be one in 100. It might be one in two. We just don't know. NO ONE KNOWS. Remember this. Many, many people will try to convince you that they know. All they are doing is arguing their viewpoint. They don't really know. No one can predict the future. Again, remember this.

We are not arguing over whether or not this will happen in this essay. We are accepting the consensus of experts that it could happen. And we urge consideration of the fact that the actual likelihood it will happen is currently unknown.

This is also not the forum to discuss how we could ever even know the likelihood of a future event. Forecasting the future is, of course, an inexact science; we'll never really know, for sure, the likelihood of a future event. There are numerous forecasting methodologies out there that scientists and decision-makers use, and I offer no opinion on them here. With regard to superintelligence, the Wait But Why essay does a good job going over some of the methods we've utilized in the past, such as polling scientists at conferences.

I've been aware of the potential of this issue for decades. But like you, I thought it was far off, not my generation's problem. AI research, like many areas of research my sci-fi inner child loved, had been stalled for the last 30 to 50 years. We had seen little progress in space exploration, self-driving cars, solar power, virtual reality, electric cars, flying cars, etc. Like these other areas, AI research seemed on pause. I suspect that was partially because of the brain drain caused by building the Internet, and partially because some problems proved more difficult than expected.

Yet, much like each of these fields, AI research has exploded in the last five to ten years. The field is back, and back with a vengeance.

Up to now, AI policy has been defined almost exclusively by AI researchers, policy wonks, and tech company executives. Even our own government has been, by and large, absent from the conversation. I asked one friend knowledgeable about the executive branch's handle on the situation, and he said, in effect, that they're not unaware, but they have more pressing matters.

A massive amount of AI research is being done, and most of humanity has no idea how far along we are on the journey. To be fair, the researchers involved often have good reasons for not shouting their research from the rooftops: they don't want to cause unnecessary alarm, and they worry about a clampdown on their ability to publish what they do publish. The fact remains that the public is, by and large, being left in the dark.

I believe that when facing a decision that affects the entirety of humanity at a fundamental level, not just life or death but the very notion of existence, we all should be involved in making it.

This is, admittedly, a democratic notion. Many people believe in democracy in only a limited manner: they fret over the will of the masses, direct democracy, and decisions made in the heat of the moment. This is all valid, and reasonable people can have a debate about these nuances. I do not seek to hash them all out here. I'm not saying we need a worldwide vote.

I am saying, however, that all of humanity should have a say in the pursuit of breakthroughs that put its very existence at risk. The will of the people should be our guide, and the better informed they are, the better decisions they will make.

There is a distinction between voting and polling. Voting, in its ideal form, determines who governs; polling guides how they govern. A congresswoman may be in office because, say, 22% of all non-felon adults in her district put her there. She may then govern by listening to the will of the people as a whole through polls. Something similar should be applied here.

If this were classical economics, and humans were what John Stuart Mill dubbed homo economicus (perfectly rational beings with all the relevant knowledge at hand), humanity could simply weigh the likelihood and magnitude of the risk against the likelihood and magnitude of the potential benefits, and come to a decision. Reality is more complex. First, the potential downside and upside are both, essentially, infinite in economic terms, which throws the equation out of whack. And second, of course, we do not actually know the likelihood that SAI will lead to humanity's destruction. It's a safe guess that that number exists, but we don't know it.
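To see why infinite stakes throw the equation out of whack, here is a toy sketch in Python. It is my own illustration, not part of the polling project, and the 80/20 odds and payoff figures are invented, since nobody knows the real numbers:

```python
# A toy expected-value calculation with made-up numbers.
p_good, p_bad = 0.8, 0.2

# With finite payoffs, homo economicus just takes the weighted sum.
finite_upside, finite_downside = 1_000, -1_000
print(p_good * finite_upside + p_bad * finite_downside)  # 600.0: proceed

# With unbounded stakes (immortality vs. extinction), the sum becomes
# inf + (-inf), which is undefined, so the calculation yields no answer.
inf = float("inf")
print(p_good * inf + p_bad * -inf)  # nan
```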

Luckily, our very faults (that we are not homo economicus) also lead to our strength in this situation: we can deal with fuzzy numbers and the notion of infinity. Our brains contain multitudes, to borrow from Walt Whitman.

What, then, is the level of acceptable risk that will cause humanity to, at least by consensus, accept our pursuit of superintelligence?

It came as a shock to me, then, that the population at large hasn't really been polled about its views on the potential of a superintelligence apocalypse. There are several polls about artificial intelligence (this one by the British Science Association is a good example), but not so many about the existential risk potentially inherent in pursuing superintelligence. Those that exist are generally in the same mold as this one by 60 Minutes, inquiring about its audience's favorite AI movies and where one would hide from the robot insurrection. It also helpfully asks whether one should fear the robots killing us more than we fear ourselves. One could argue that this is a leading question, and in any case it's hardly useful for the development of public policy. Searching Google for "superintelligence polling" yields little other than polling of experts, and searching for "superintelligence public opinion" yields virtually nothing.

On the academic front, this December 2016 paper by Stanford's Ethan Fast and Microsoft's Eric Horvitz does a superb job surveying the landscape, relying primarily on press mentions and press tone, while acknowledging that the polling is light and not specifically focused on superintelligence. Nonetheless, it is a fascinating read.

All in all, though, data around the existential risk mankind may face with the onset of superintelligence, and Americans' views on it, is sparse indeed.

So I set out to do it myself.

You can view my entire dataset here.

First, I set out to ask some top-level questions about superintelligence research. Now, I confess, I am not a pollster, and I know these questions are somewhat leading. I did my best to keep them neutral, but I've got my own biases. Nonetheless, it seemed worthwhile to just go ahead and ask a bunch of Americans what they think about the risks and potential of superintelligence.

We asked 400 individuals four top-level questions regarding superintelligence research.

At a top level, Americans seem to find the prospect of superintelligence and its benefits exciting, though it is not a ringing endorsement. Some 47% of Americans characterized themselves as excited on some level.

Again, I caution that this data is limited. Furthermore, I am not a statistics expert, so I can't say (for example) exactly what happens to the margin of error when you poll a lot of people across many income levels and then analyze the subsets by income, but I suspect the subset estimates are less precise than the base poll.
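For a rough sense of the numbers, here is a back-of-the-envelope sketch in Python using the standard normal-approximation formula for a simple random sample. This is my own addition, not part of the survey methodology; the 400 matches the sample size above, while the 100-person subgroup is hypothetical:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of
    size n, using the normal approximation and worst-case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"full sample, n=400: +/-{margin_of_error(400):.1%}")    # +/-4.9%
print(f"income subset, n=100: +/-{margin_of_error(100):.1%}")  # +/-9.8%
```

The smaller the subset, the wider the interval, which is why subgroup numbers deserve more caution than the topline figures.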

It would be awesome if someone started polling about this stuff regularly. This is just one snapshot, and polls are more accurate over time.

And it would be amazing if people started polling other countries. When I originally planned this research, I wanted to poll across countries, but SurveyMonkey didn't have that functionality. Since I started in January, they've begun offering some international polling. I hope someone gets on that; I am tapped out.

It would be great if people ran these polls with larger sample sizes and better margins of error, especially the poll of black Americans. Other subgroups, too: SurveyMonkey doesn't offer much when it comes to Asian Americans, Hispanics, and other minority groups.

So. What does all this mean? After all, it's not like God will come down from on high and say: "Hey Americans! Right now you have an 80% likelihood of not dying if you give this superintelligence thing a go!" We will never, really, know the likelihood. But what this does tell us is that Americans are relatively risk-averse in this regard (though the math is a bit wonky when we are dealing with infinite risk and infinite reward). This is not surprising: modern behavioral economics research has shown that humans value what they have over what they might gain in the future.

We also see from the dataset that Americans are more skeptical of institutions pursuing superintelligence research on their own. I suspect that if Americans knew the true extent of what's being done on this front, these trust numbers would decline further, but that's just a hunch. In any case, this data could be useful to institutions debating how and when to disclose their superintelligence research to the public; there may be some ticking time bombs surrounding the goodwill line item on some of these companies' balance sheets.


Whatever your interpretation, it's my hope that this can help spawn some efforts by policymakers, researchers, corporations, and academic institutions to gauge the will of the people regarding the research they are supporting or undertaking. I conclude with a quote from Robert Oppenheimer, one of the inventors of the atomic bomb: "When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb."

I pulled the Oppenheimer quote from a recent New Yorker article about CRISPR DNA editing and the scientist Kevin Esvelt's efforts to bring that research into the open. "We really need to think about the world we are entering," Esvelt says elsewhere in the piece. "To an appalling degree, not that much has changed. Scientists still really don't care very much about what others think of their work."

I'll save my personal interpretation of the data for another essay. I've tried to keep editorializing to a minimum. This is not to say that I haven't formed opinions when looking at this data. I hope you do too.
