Note: This is Part 2 of a two-part series on AI. Part 1 is here.
PDF: We made a fancy PDF of this post for printing and offline viewing. Buy it here. (Or see a preview.)
___________
"We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends."
Nick Bostrom
Welcome to Part 2 of the "Wait, how is this possibly what I'm reading, I don't get why everyone isn't talking about this" series.
Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it's all around us in the world today. We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that's at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we've seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this:
This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that's way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that.
Before we dive into things, let's remind ourselves what it would mean for a machine to be superintelligent.
A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone's first thought when they imagine a super-smart computer is one that's as intelligent as a human but can think much, much faster; they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.
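That "decade in five minutes" equivalence is just the speedup arithmetic; here's a minimal sketch, assuming a ten-year task and the million-fold speedup named above:

```python
# Back-of-envelope check on the "million times quicker" claim:
# how long does a decade of human thinking take at 1,000,000x speed?
minutes_per_decade = 10 * 365.25 * 24 * 60  # ~5.26 million minutes
speedup = 1_000_000
print(minutes_per_decade / speedup)  # -> ~5.3, i.e. "about five minutes"
```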
That sounds impressive, and ASI would think much faster than any human could, but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn't a difference in thinking speed; it's that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or long-term planning or abstract reasoning, that chimps' brains do not. Speeding up a chimp's brain by thousands of times wouldn't bring him to our level; even with a decade's time, he wouldn't be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.
But it's not just that a chimp can't do what we do, it's that his brain is unable to grasp that those worlds even exist; a chimp can become familiar with what a human is and what a skyscraper is, but he'll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it's beyond him to realize that anyone can build a skyscraper. That's the result of a small difference in intelligence quality.
And in the scheme of the intelligence range we're talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:
To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp's incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us, let alone do it ourselves. And that's only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants; it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.
But the kind of superintelligence we're talking about today is something far beyond anything on this staircase. In an intelligence explosion, where the smarter a machine gets, the quicker it's able to increase its own intelligence, until it begins to soar upwards, a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it's on the dark green step two above us, and by the time it's ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it's distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that's here on the staircase (or maybe a million times higher):
And since we just established that it's a hopeless activity to try to understand the power of a machine only two steps above us, let's very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn't understand what superintelligence means.
Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we'll be dramatically stomping on evolution. Or maybe this is part of evolution; maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it's capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:
And for reasons we'll discuss later, a huge part of the scientific community believes that it's not a matter of whether we'll hit that tripwire, but when. Kind of a crazy piece of information.
So where does that leave us?
Well, no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and lead AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.
First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction.
"All species eventually go extinct" has been almost as reliable a rule through history as "All humans eventually die" has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it's only a matter of time before some other species, some gust of nature's wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state: a place species are all teetering on falling into and from which no species ever returns.
And while most scientists I've come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI's abilities could be used to bring individual humans, and the species as a whole, to a second attractor state: species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we'll be impervious to extinction forever; we'll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it's just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.
If Bostrom and others are right, and from everything I've read, it seems like they really might be, we have two pretty shocking facts to absorb:
1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.
2) The advent of ASI will make such an unimaginably dramatic impact that it's likely to knock the human race off the beam, in one direction or the other.
It may very well be that when evolution hits the tripwire, it permanently ends humans' relationship with the beam and creates a new world, with or without humans.
Kind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire and which side of the beam will we land on when that happens?
No one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We'll spend the rest of this post exploring what they've come up with.
___________
Let's start with the first part of the question: When are we going to hit the tripwire?
i.e. How long until the first machine reaches superintelligence?
Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:
Those people subscribe to the belief that this is happening soon: that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.
Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and that we're not actually that close to the tripwire.
The Kurzweil camp would counter that the only underestimating that's happening is the underappreciation of exponential growth, and they'd compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.
The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.
A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there's no guarantee about that; it could also take a much longer time.
Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it's more likely that ASI won't actually ever be achieved.
So what do you get when you put all of these opinions together?
In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: "For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI [i.e. AGI] to exist?" It asked them to name an optimistic year (one in which they believe there's a 10% chance we'll have AGI), a realistic guess (a year they believe there's a 50% chance of AGI, i.e. after that year they think it's more likely than not that we'll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we'll have AGI). Gathered together as one data set, here were the results:
Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075
So the median participant thinks it's more likely than not that we'll have AGI 25 years from now. The 90% median answer of 2075 means that if you're a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.
A separate study, conducted recently by author James Barrat at Ben Goertzel's annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved: by 2030, by 2050, by 2100, after 2100, or never. The results:
By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%
Pretty similar to Müller and Bostrom's outcomes. In Barrat's survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don't think AGI is part of our future.
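To see where those "over two thirds" and "2%" figures come from, here's a minimal sketch that just accumulates Barrat's reported buckets (the shares sum to 99% due to rounding):

```python
# Accumulate Barrat's survey buckets into "AGI by year X" totals.
buckets = [("by 2030", 42), ("by 2050", 25), ("by 2100", 20),
           ("after 2100", 10), ("never", 2)]

cumulative = 0
for horizon, pct in buckets:
    cumulative += pct
    print(f"{horizon}: {pct}% (cumulative: {cumulative}%)")
# "by 2050" accumulates to 67%: the "over two thirds" figure above.
```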
But AGI isn't the tripwire, ASI is. So when do the experts think we'll reach ASI?
Müller and Bostrom also asked the experts how likely they think it is that we'll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:
The median answer put a rapid (2-year) AGI-to-ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.
We don't know from this data what transition length the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let's estimate that they'd have said 20 years. So the median opinion, the one right in the center of the world of AI experts, believes the most realistic guess for when we'll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.
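Spelling the ballpark out as code (the 20-year figure is this post's rough interpolation, not a surveyed number):

```python
# Ballpark estimate of the median-expert ASI arrival year, combining
# the Mueller & Bostrom (2013) median AGI year with a guessed
# AGI-to-ASI transition length.
median_agi_year = 2040
estimated_transition_years = 20  # assumption: interpolated between the
                                 # surveyed 10%-at-2-years and
                                 # 75%-at-30-years answers
print(median_agi_year + estimated_transition_years)  # -> 2060
```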
Of course, all of the above statistics are speculative, and they're only representative of the center opinion of the AI expert community, but they tell us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.
Okay, now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?
Superintelligence will yield tremendous power; the critical question for us is:
Who or what will be in control of that power, and what will their motivation be?
The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.
Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom's survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It's also worth noting that those numbers refer to the advent of AGI; if the question were about ASI, I imagine that the neutral percentage would be even lower.
Before we dive much further into this good vs. bad outcome part of the question, let's combine both the "when will it happen?" and the "will it be good or bad?" parts of this question into a chart that encompasses the views of most of the relevant experts:
We'll talk more about the Main Camp in a minute, but first: what's your deal? Actually I know what your deal is, because it was my deal too before I started researching this topic. Some reasons most people aren't really thinking about this topic:
One of the goals of these two posts is to get you out of the "I Like to Think About Other Things" Camp and into one of the expert camps, even if you're just standing on the intersection of the two dotted lines in the square above, totally uncertain.
During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people's opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:
We're gonna take a thorough dive into both of these camps. Let's start with the fun one...
As I learned about the world of AI, I found a surprisingly large number of people standing here:
The people on Confident Corner are buzzing with excitement. They have their sights set on the fun side of the balance beam and they're convinced that's where all of us are headed. For them, the future is everything they ever could have hoped for, just in time.
The thing that separates these people from the other thinkers we'll discuss later isn't their lust for the happy side of the beam; it's their confidence that that's the side we're going to land on.
Where this confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it's naive to conjure up doomsday scenarios when, on balance, technology has helped us and will likely continue to help us a lot more than it hurts us.
We'll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let's take a good hard look at what's over there on the fun side of the balance beam, and try to absorb the fact that the things you're reading might really happen. If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him; we have to be humble enough to acknowledge that it's possible that an equally inconceivable transformation could be in our future.
Nick Bostrom describes three ways a superintelligent AI system could function: as an oracle, which accurately answers nearly any question posed to it; as a genie, which executes any high-level command it's given and then awaits the next one; or as a sovereign, which is assigned a broad, open-ended pursuit and allowed to operate in the world freely, making its own decisions about how best to proceed.
These questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the "my pencil fell off the table" situation, which you'd do by picking it up and putting it back on the table.
Eliezer Yudkowsky, a resident of Anxious Avenue in our chart above, said it well:
"There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from impossible to obvious. Move a substantial degree upwards, and all of them will become obvious."
There are a lot of eager scientists, inventors, and entrepreneurs in Confident Corner, but for a tour of the brightest side of the AI horizon, there's only one person we want as our tour guide.
Ray Kurzweil is polarizing. In my reading, I heard everything from godlike worship of him and his ideas to eye-rolling contempt for them. Others were somewhere in the middle; author Douglas Hofstadter, in discussing the ideas in Kurzweil's books, eloquently put forth that "it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad."
Whether you like his ideas or not, everyone agrees that Kurzweil is impressive. He began inventing things as a teenager and in the following decades, he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition. He's the author of five national bestselling books. He's well-known for his bold predictions and has a pretty good record of having them come true, including his prediction in the late '80s, a time when the internet was an obscure thing, that by the early 2000s, it would become a global phenomenon. Kurzweil has been called "a restless genius" by The Wall Street Journal, "the ultimate thinking machine" by Forbes, "Edison's rightful heir" by Inc. Magazine, and "the best person I know at predicting the future of artificial intelligence" by Bill Gates. In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google's Director of Engineering. In 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.
This biography is important. When Kurzweil articulates his vision of the future, he sounds fully like a crackpot, and the crazy thing is that he's not; he's an extremely smart, knowledgeable, relevant man in the world. You may think he's wrong about the future, but he's not a fool. Knowing he's such a legit dude makes me happy, because as I've learned about his predictions for the future, I badly want him to be right. And you do too. As you hear Kurzweil's predictions, many shared by other Confident Corner thinkers like Peter Diamandis and Ben Goertzel, it's not hard to see why he has such a large, passionate following, known as the singularitarians. Here's what he thinks is going to happen:
Timeline
Kurzweil believes computers will reach AGI by 2029 and that by 2045, we'll have not only ASI, but a full-blown new world, a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many, but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil's timeline. His predictions are still a bit more ambitious than the median respondent on Müller and Bostrom's survey (AGI by 2040, ASI by 2060), but not by that much.
The 2045 singularity, as Kurzweil depicts it, is brought about by three simultaneous revolutions in biotechnology, nanotechnology, and, most powerfully, AI.
Before we move on: nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it...
Nanotechnology Blue Box
Nanotechnology is our word for technology that deals with the manipulation of matter that's between 1 and 100 nanometers in size. A nanometer is a billionth of a meter, or a millionth of a millimeter, and this 1-100 range encompasses viruses (100 nm across), DNA (10 nm wide), and things as small as large molecules like hemoglobin (5 nm) and medium molecules like glucose (1 nm). If/when we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (~0.1 nm).
To understand the challenge of humans trying to manipulate matter in that range, let's take the same thing on a larger scale. The International Space Station is 268 mi (431 km) above the Earth. If humans were giants so large their heads reached up to the ISS, they'd be about 250,000 times bigger than they are now. If you make the 1 nm to 100 nm nanotech range 250,000 times bigger, you get 0.25 mm to 2.5 cm. So nanotechnology is the equivalent of a human giant as tall as the ISS figuring out how to carefully build intricate objects using materials between the size of a grain of sand and an eyeball. To reach the next level, manipulating individual atoms, the giant would have to carefully position objects that are 1/40th of a millimeter, so small that normal-size humans would need a microscope to see them.
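The whole analogy is one scale factor applied to each size; a quick sketch, assuming a 1.7 m human, reproduces every number in the paragraph:

```python
# Reproduce the giant-as-tall-as-the-ISS scale analogy.
iss_altitude_m = 431_000   # ~431 km (268 mi)
human_height_m = 1.7       # assumed typical adult height
scale = iss_altitude_m / human_height_m
print(f"scale factor: ~{scale:,.0f}x")  # -> ~253,529x, "about 250,000"

nm_to_m = 1e-9
for size_nm in (1, 100, 0.1):  # nanotech range, plus a single atom
    scaled_mm = size_nm * nm_to_m * scale * 1000
    print(f"{size_nm} nm scales to {scaled_mm:.3f} mm")
# 1 nm   -> ~0.25 mm  (grain of sand)
# 100 nm -> ~25 mm, i.e. ~2.5 cm  (eyeball)
# 0.1 nm -> ~0.025 mm, the "1/40th of a millimeter" atom
```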
Nanotech was first discussed by Richard Feynman in a 1959 talk, when he explained: "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible for a physicist to synthesize any chemical substance that the chemist writes down. How? Put the atoms down where the chemist says, and so you make the substance." It's as simple as that. If you can figure out how to move individual molecules or atoms around, you can make literally anything.
Nanotech became a serious field for the first time in 1986, when engineer Eric Drexler provided its foundations in his seminal book Engines of Creation, but Drexler suggests that those looking to learn about the most modern ideas in nanotechnology would be best off reading his 2013 book, Radical Abundance.
Gray Goo Bluer Box
We're now in a diversion in a diversion. This is very fun.
Anyway, I brought you here because there's this really unfunny part of nanotechnology lore I need to tell you about. In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two then turn into four, four into eight, and in about a day, there'd be a few trillion of them ready to go. That's the power of exponential growth. Clever, right?
It's clever until it causes the grand and complete Earthwide apocalypse by accident. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth's biomass contains about 10^45 carbon atoms. A nanobot would consist of about 10^6 carbon atoms, so 10^39 nanobots would consume all life on Earth, which would happen in 130 replications (2^130 is about 10^39), as oceans of nanobots (that's the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in 3.5 hours.
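The apocalypse arithmetic above is just repeated doubling; a minimal sketch using the paragraph's own figures reproduces it:

```python
import math

# Gray-goo back-of-envelope: doublings until the bots exhaust
# the carbon in Earth's biomass.
biomass_carbon_atoms = 1e45   # carbon atoms in Earth's biomass
atoms_per_nanobot = 1e6       # carbon atoms per nanobot
bots_needed = biomass_carbon_atoms / atoms_per_nanobot  # 1e39 nanobots

replications = math.ceil(math.log2(bots_needed))
print(replications)  # -> 130 doublings (2**130 is about 1e39)

seconds_per_replication = 100  # the paragraph's assumed replication time
print(replications * seconds_per_replication / 3600)  # -> ~3.6 hours
```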