Update: readers of the post have also pointed out this critique by Ernest Davis and this response to Davis by Rob Bensinger.
Update 2: Both Rob Bensinger and Michael Tetelman rightly pointed out that my definition of intelligence was sloppy. I've added a clarification that the definition is for a given task.
Cover of Superintelligence
This post is a discussion of Nick Bostrom's book Superintelligence. The book has had an effect on the thinking of many of the world's thought leaders, not just in artificial intelligence but across a range of domains (politicians, physicists, business leaders). In that light, and given this series of blog posts is about the Future of AI, it seemed important to read the book and discuss his ideas.
In an ideal world, this post would have contained more summaries of the book's arguments, and perhaps a later update will improve on that aspect. For the moment the review focuses on counter-arguments and perceived omissions (the post already got too long just covering those).
Bostrom considers the various routes by which we might form intelligent machines and what the possible outcomes of developing such technologies might be. He is a professor of philosophy, but has an impressive array of background degrees in areas such as mathematics, logic, philosophy and computational neuroscience.
So let's start at the beginning and put the book in context by trying to understand what is meant by the term "superintelligence".
In common with many contributions to the debate on artificial intelligence, Bostrom never defines what he means by intelligence. Obviously, this can be problematic. On the other hand, superintelligence is defined as outperforming humans in every intelligent capability that they express.
Personally, I've developed the following definition of intelligence: use of information to take decisions which save energy in pursuit of a given task. Here by "information" I mean data, facts or rules, and by "saving energy" I mean saving free energy.
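To make that concrete, here is one hedged way my definition could be formalised (the sketch and the notation are my own, not anything from the book): score the intelligence an agent derives from information $I$ on a task $T$ by the free energy it expects to save by using that information,

$$\mathrm{intelligence}(I;T) \;=\; \mathbb{E}\!\left[F(a^{*}_{\varnothing};T) - F(a^{*}_{I};T)\right], \qquad a^{*}_{I} \;=\; \arg\min_{a}\, \mathbb{E}\!\left[F(a;T)\,\middle|\,I\right],$$

where $F(a;T)$ is the free energy expended in completing the task after decision $a$, $a^{*}_{I}$ is the decision taken in light of the information, and $a^{*}_{\varnothing}$ is the decision taken without it. The definition is task-relative by construction, which is the clarification the update above refers to.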
However, accepting Bostrom's lack of a definition of intelligence (and perhaps taking note of my own), we can still consider the routes to superintelligence Bostrom proposes. It is important to bear in mind that Bostrom is worried about the effects of intelligence on 30-year (and greater) timescales. These are timescales over which it is difficult to predict. I think it is admirable that Nick is trying to address this, but I'm also keen to ensure that particular ideas which are at best implausible, and at worst a misrepresentation of current research, don't become memes in the very important debate on the future of machine intelligence.
A technological singularity is when a technology becomes transhuman in its possibilities, moving beyond our own capabilities through self-improvement. It's a simple idea, and often there's nothing to be afraid of. For example, in mechanical engineering, we long ago began to make tools that could manufacture other tools, and indeed the precision of the manufactured tools outperformed those we could make by hand. This led to a technological singularity of precision-made tools. We developed transhuman milling machines and lathes. We developed superprecision: precision beyond the capabilities of any human. Of course, there are physical limits on how far this particular technological singularity has taken us. We cannot achieve infinitely precise machining tolerances.
In machining, the concept of precision can be defined in terms of the tolerance to which the resulting parts are made. Unfortunately, the lack of a definition of intelligence in Bostrom's book makes it harder to ground the argument. In practice this means that the book often exploits different facets of intelligence, combining them in worst-case scenarios while simultaneously conflating conflicting principles.
The book gives little thought to the differing natures of machine and human intelligence. For example, there is no acknowledgment of the embodied nature of our intelligence. There are physical constraints on communication rates, and for humans these constraints are much stronger than for machines. Machine intelligences communicate with one another in gigabits per second; humans in bits per second. As for relative computational ability, the best estimates are that, in terms of the underlying computation in the brain, we are computing much faster than machines. This means humans have a very high compute/communicate ratio. We might think of that as an embodiment factor. We can compute far more than we can communicate, leading to a backlog of conclusions within our own minds. Much of our human intelligence seems doomed to remain within ourselves. This dominates the nature of human intelligence. In contrast, this phenomenon is only weakly observed in computers, if at all: computers can distribute the results of their intelligence at approximately the same rate that they compute them.
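To see how stark this difference is, here is a minimal sketch of the arithmetic. The figures are rough order-of-magnitude assumptions of my own (common estimates for brain computation, speech-rate communication, and commodity hardware), not measurements:

```python
# All figures are illustrative order-of-magnitude assumptions, not data.
human_compute = 1e16        # ops/second: common estimates of brain computation
human_communicate = 1e2     # bits/second: roughly the rate of speech or reading

machine_compute = 1e15      # ops/second: a large compute node
machine_communicate = 1e10  # bits/second: ~10 gigabit networking

# Embodiment factor: how much an entity computes per bit it can share.
human_embodiment = human_compute / human_communicate        # ~1e14
machine_embodiment = machine_compute / machine_communicate  # ~1e5

print(f"human embodiment factor:   {human_embodiment:.0e}")
print(f"machine embodiment factor: {machine_embodiment:.0e}")
```

On these assumed numbers the two factors differ by around nine orders of magnitude: the machine can share almost everything it computes, while we can share almost none of it.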
Bostrom's idea of superintelligence is an intelligence that outperforms us in all its facets. But if our emotional intelligence is a result of our limited communication ability, then it might be impossible to emulate it without also implementing the limited communication. Since communication also affects other facets of our intelligence, we can see how it may, therefore, be impossible to dominate human abilities in the manner the concept of superintelligence envisages. A better definition of intelligence would have helped resolve these arguments.
My own belief is that we became individually intelligent through a need to model each other (and ourselves) to perform better planning. So we evolved to undertake collaborative planning and developed complex social interactions. As a result our collective intelligence as a species became increasingly complex (on evolutionary timescales) as we evolved greater intelligence within each of the individuals that made up our social group. Because of this process I find it difficult to fully separate our collective intelligence from our individual intelligences. I don't think Bostrom wrestles with this dichotomy, because my impression is that his book views human intelligence only as an individual characteristic. My feeling is that this is limiting, because any algorithms we create to emulate our intelligence will actually operate on societal scales, and the interaction of artificial intelligence with our own should be considered in that context.
As humans, we are a complex society of interacting intelligences. Any predictions we make within that society would seem particularly fraught. Intelligent decision making relies on such predictions to quantify the value of a particular decision (in terms of the energy it might save). But when we want to consider plausible future scenarios, we are faced with exponential growth of complexity in an already extremely complex system.
In practice we can make progress with our predictions by compressing the complex world into abstractions: simplifications of the world around us that are sufficiently predictive for our purposes but retain tractability. However, using such abstractions introduces model uncertainty. Model uncertainty reflects the unknown way in which the actual world will differ from our simplifications.
Practitioners who have performed sensitivity analysis on time series prediction will know how quickly uncertainty accumulates as you try to look forward in time. There is normally a time frame beyond which things become too misty to compute any more. Further computational power doesn't help you at that point, because uncertainty dominates: reducing model uncertainty requires exponentially greater computation. We might try to handle this uncertainty by quantifying it, but even that can prove intractable.
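A minimal sketch of this horizon, using the chaotic logistic map as a stand-in for a hard-to-predict system (the map, its parameter and the perturbation size are my own illustrative choices):

```python
# Two trajectories of the logistic map whose starting points differ by one
# part in a billion: the "true" world and our abstraction of it.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

x, y = 0.5, 0.5 + 1e-9
for t in range(1, 61):
    x, y = logistic(x), logistic(y)
    if t % 10 == 0:
        print(f"t={t:2d}  prediction error = {abs(x - y):.3e}")

# The error grows roughly exponentially until it saturates at the scale of
# the system itself, after which the prediction carries no information.
# Extra compute does not recover the horizon; only an exponentially more
# precise abstraction of the initial state would.
```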
So, just like the elusive concept of infinite precision in mechanical machining, there is likely a limit on the degree to which an entity can be intelligent. We cannot predict with infinite precision, and this renders our predictions useless beyond some particular time horizon.
The limit on predictive precision is imposed by the exponential growth in complexity of exact simulation, coupled with the accumulation of error associated with the necessary abstraction of our predictive models. As we predict further forward, these uncertainties can saturate, coming to dominate our predictions. As a result we often have only a very vague notion of what is to come. This limit on our predictive ability places a fundamental limit on our ability to make intelligent decisions.
There was a time when people believed in perpetual motion machines (and quite a lot of effort was put into building them). The physical limitations of such machines were only understood in the late 19th century (for example, the limit on the efficiency of heat engines was theoretically formulated by Carnot). We don't yet know the theoretical limits of intelligence, but the intellectual gymnastics of some of the entities described in Superintelligence will likely be curtailed by the underlying mathematics. In practice the singularity will saturate; it's just a question of where that saturation will occur relative to our current intelligence. Bostrom thinks it will be a long way ahead. I tend to agree, but I don't think the results will be as unimaginable as is made out. Machines are already a long way ahead of us in many areas (weather prediction, for example), but I don't find that unimaginable either.
Unfortunately, in his own analysis, Bostrom hardly makes any use of uncertainty when envisaging future intelligences. In practice the correct handling of uncertainty is critical in intelligent systems, and by ignoring it Bostrom can give the impression that a superintelligence would act with unnerving confidence. Indeed, the only point where I recollect uncertainty being mentioned is when it is used to unnerve us further. Bostrom refers to how he thinks a sensible Bayesian agent would respond to being given a particular goal: he suggests that, due to uncertainty, it would believe it might not have achieved its goal and would continue to consume the world's resources in an effort to do so. In this respect the agent takes the inverse of the action suggested by the Greek skeptic Aenesidemus, who advocated suspension of judgement, or epoché, in the presence of uncertainty. Suspension of judgement (delay of decision making) means specifically refraining from action. That is indeed the intelligent reaction to uncertainty: don't needlessly expend energy when the outcome is uncertain (to do so would contradict my definition of intelligent behaviour). This idea emerges as optimal behaviour from a mathematical treatment of such systems when uncertainty is incorporated.
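As a toy version of that mathematical treatment (my own construction, not an argument from the book): suppose the agent believes with probability p that its goal is already met, acting again carries a certain energy cost, and acting only pays off if the goal was in fact unmet. Expected-utility maximisation then prescribes exactly the skeptic's suspension of judgement once the agent is confident enough:

```python
# Toy decision rule: act only when the expected gain from acting exceeds
# the certain energy cost of the action. All numbers are illustrative.
def should_act(p_goal_met: float, benefit: float, cost: float) -> bool:
    """True if acting beats suspending judgement (the epoche)."""
    expected_gain = (1.0 - p_goal_met) * benefit
    return expected_gain > cost

print(should_act(p_goal_met=0.99, benefit=1.0, cost=0.05))  # False: wait
print(should_act(p_goal_met=0.50, benefit=1.0, cost=0.05))  # True: act
```

Far from endlessly consuming resources to chase residual doubt, an uncertain agent that accounts for the cost of action stops.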
This meme occurs throughout the book: the savant idiot, a gifted intelligence that does a particular thing really stupidly. As such it contradicts the concept of superintelligence. The superintelligence is better in all ways than us, but then somehow must also be taught values and morals. Values and morals are part of our complex emergent human behaviour, part of both our innate and our developed intelligence, both individually and collectively as a species. They are part of our natural conservatism that constrains extreme behaviour. Constraints on extreme behaviour are necessary because of the general futility of absolute prediction. Just as in machining, we cannot achieve infinitely precise prediction.
Another way the savant idiot expresses itself in the book is through extreme confidence in its predictions of the future. The premise is that it will aggressively follow a strategy (potentially to the severe detriment of humankind) in an effort to fulfill a defined final goal. We'll address the mistaken idea of a simplistic final goal below.
On a shallow reading, Bostrom's ideas seem to provide an interesting narrative. In the manner of an Ian Fleming novel, the narrative is littered with technical detail to increase its plausibility for the reader. However, in the same way that so many of Blofeld's schemes prove fragile when exposed to deeper analysis, so do many of Bostrom's ideas.
In reality, the challenges associated with abstracting the world render the future inherently unpredictable, both to humans and to our computers. Even when many aspects of a system are broadly understood (such as our weather), prediction far into the future is untenable due to the propagation of uncertainty through the system. Uncertainty tends to inflate as time passes, rendering only near-term prediction plausible. Inherent to any intelligent behaviour is an understanding of the limits of prediction. Intelligent behaviour withdraws, when appropriate, to the suspension of judgement, inactivity, the epoché. This simple idea finesses many of the challenges of artificial intelligence that Bostrom identifies.
Large sections of the book are dedicated to whole brain emulation, under the premise that this might be achievable before we have understood intelligence (superintelligence could then be achieved by hitting the turbo button and running those brains faster). Simultaneously, hybrid brain-machine systems are rejected as a route forward due to the perceived difficulty of developing such interfaces.
Such uneven-handed treatment of possible future paths to AI makes the book a very frustrating read. If we had the level of understanding needed to fully emulate the brain, then we would know what is important to emulate in the brain to recreate intelligence. The path to that achievement would also involve improvements in our ability to interface directly with the brain. Given that there are immediate applications for patients, e.g. those with spinal injuries or suffering from ALS, I think we will have developed hybrid systems that interface directly with the brain long before we have managed a full emulation of the human brain. Indeed, such applications may prove critical to developing our understanding of how the brain implements intelligence.
Perhaps Bostrom's naive premise about the ease of brain emulation comes from a lack of understanding of what it would involve. It could not involve an exact simulation of each neuron in the brain down to the quantum level (and if it did, it would be many orders of magnitude more computationally demanding than is suggested in the text). Instead it would involve some level of abstraction: abstraction of those aspects of the biochemistry and physics of the brain that are important in generating our intelligence. Modelling and simulation of the brain would require that our simulations replace the actual mechanisms with the salient parts of those mechanisms that the brain makes use of for intelligence.
As we've mentioned in the context of uncertainty, an understanding of this sort of abstraction is missing from Superintelligence, but it is vital in modelling and, I believe, vital in intelligence. Such abstractions require a deep understanding of how the brain is working, and such understanding is exactly what Bostrom says is impossible to determine for developing hybrid systems.
Over the 30-year time horizons that Bostrom is interested in, hybrid human-machine systems could become very important. They are highly likely to arise before a full understanding of the brain is developed, and if they do they will change the way society evolves. That's not to say that we won't experience societal challenges, but they are likely to be very different from the threats Bostrom perceives. Importantly, when considering humans and computers, the line of separation between the two may not be as distinctly drawn as Bostrom suggests. It wouldn't be human vs computer, but augmented human vs computer.
One aspect that, it seems, must be hard to understand if you're not an active researcher is the nature of technological advance at the cutting edge. The impression Bostrom gives is that research in AI is a set of journeys with predefined goals: it is therefore merely a matter of assigning resources, planning, and navigating your way there. In his strategies for reacting to the potential dangers of AI, Bostrom suggests different areas in which we should focus our advances (which of these expeditions should we fund, and which should we impede). In reality, we cannot switch research directions on and off in such a simplistic manner. Most research in AI is less an organized journey and more an exploration of uncharted terrain. You set sail from Spain with government backing and a vague notion of a shortcut to the spice trade of Asia, but instead you stumble on an unknown continent of gold-ridden cities. Even then, you don't realize the truth of what you have discovered within your own lifetime.
Even for the technologies that are within our reach, when we look to the past we see that people were normally overly optimistic about how rapidly new advances could be deployed and assimilated by society. In the 1970s Xerox PARC focused on the idea that the office of the future would be paperless. It was a sensible projection, but before it came about (indeed, it's not quite here yet) there was an enormous proliferation in the use of paper: demand actually increased.
Rather than the sudden arrival of the singleton, I suspect we'll experience something very similar to our journey to the paperless office with artificial intelligence technologies. As we develop AI further, we will likely require more sophistication from humans. For example, we won't be able to replace doctors immediately; first we will need doctors who have a more sophisticated understanding of data. They'll need to interpret the results of, e.g., high-resolution genetic testing, and they'll need to assimilate that understanding with their other knowledge. The hybrid human-machine nature of the emergence of artificial intelligence is given only sparse treatment by Bostrom, perhaps because the narrative of such co-evolution is much more difficult to describe than an independent evolution.
The explorative nature of research adds to the uncertainties about where we'll be at any given time. Bostrom talks about how to control and guide our research in AI, but the inherent uncertainties require much more sophisticated thinking about control than Bostrom offers. In a stochastic system, a controller needs to be more intelligent and more reactive: the right action depends crucially on the time horizon, and these horizons are unknown. Of course, that does not mean the research should be totally unregulated, but it means that those who suggest regulation need to be much closer to the nature of research and its capabilities. They need to work in collaboration with the community.
Arguments for large amounts of preparatory regulatory work are also undermined by the imprecision with which we can predict what will arrive and when. In 1865 Jules Verne correctly envisaged that one day humans would reach the moon. However, the manner in which they reached the moon in his book proved very different from how we arrived in reality. Verne's idea was that we'd do it using a very big gun. A good idea, but not correct. Verne was, however, correct that the Americans would get there first. One hundred and four years after he wrote, the goal was achieved through rocket power (and without any chickens inside the capsule).
This is not to say that we shouldn't be concerned about the paths we are taking. There are many issues that the increasing use of algorithmic decision making raises, and they need to be addressed. It is to say that the concerns Bostrom raises are implausible because of the imprecision of our predictions over such time frames.
Some of Bostrom's perspectives may also come from a lack of experience in deploying systems in practice. The book focuses a great deal on the programmed final goal of our artificial intelligences. It is true that most machine learning systems have objective functions, but an objective function doesn't really map very nicely onto the idea of a final goal for an intelligent system. The objective functions we normally develop are really only effective for simplistic tasks, such as classification or regression. Perhaps the more complex notion of a reward in reinforcement learning is closer, but even then the reward tends to be task specific.
Arguably, if the system does have a simplistic final goal, then it is already failing the test of superintelligence: even the simplest human is a robust combination of sometimes conflicting goals that reflect the uncertainties around us. So if we are goal driven in our intelligence, then it is by sophisticated goals (akin to multi-objective optimisation), and each of us weights those goals according to sets of values that we each evolve, both across generations and within generations. We are sophisticated about our goals, rather than simplistic, because our environment itself is evolving, implying that our ways of behaving need to evolve as well. Any AI with a simplistic final goal would fail the test of being a dominant intelligence. It would not be a superintelligence, because it would under-perform humans in one or more critical aspects.
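As a hedged illustration of that multi-objective view (the objectives, weights and numbers below are invented for the example, not drawn from the book):

```python
# Conflicting objectives scored over a one-dimensional "action" in [0, 1].
objectives = {
    "task_progress": lambda a: a,                     # more effort, more progress
    "energy_saved":  lambda a: 1.0 - a,               # more effort, more energy spent
    "social_norms":  lambda a: 1.0 - (a - 0.3) ** 2,  # a preference for moderation
}

def score(action, weights):
    """Weighted combination of conflicting objectives."""
    return sum(w * objectives[name](action) for name, w in weights.items())

# The weights play the role of values; they differ between individuals
# and drift over generations as the environment changes.
weights = {"task_progress": 0.5, "energy_saved": 0.3, "social_norms": 0.2}
actions = [i / 100 for i in range(101)]
best = max(actions, key=lambda a: score(a, weights))
print(f"preferred action under these values: {best:.2f}")  # ~0.80, not 1.00
```

An agent optimising task_progress alone would always pick the extreme action; the weighted combination of conflicting goals is what produces the conservatism described above.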
One of the routes to superintelligence explored by Bostrom involves speeding up implementations of our own intelligence. Such speed would not necessarily bring about significant advances in all domains of intelligence, due to fundamental limits on predictability: linear improvements in speed cannot keep pace with exponential growth in the computation required. But Bostrom also seems to assume that speeding up intelligences will necessarily take them beyond our comprehension or control. In practice there are many examples where this is not the case. IBM's Watson won Jeopardy!, but it did it by storing a lot more knowledge than we ever could and then using some simplistic techniques from language processing to recover those facts: it was a fancy search engine. These systems outperform us, but they are by no means beyond our comprehension. Still, that does not mean we shouldn't fear this phenomenon.
Given the quantity of data we are making available about our own behaviours and the rapid ability of computers to assimilate and intercommunicate, it is already conceivable that machines can predict our behaviour better than we can: not through superintelligence, but through the scaling up of simple systems. They've finessed the uncertainty through access to large quantities of data. These are the advances we should be wary of, yet they are not beyond our understanding. Such speeding up of compute and acquisition of large data sets is exactly what has led to the recent revolution in convolutional and recurrent neural networks. All our recent successes are just more compute and more data.
This brings me to another major omission of the book, and this one is ironic, because it concerns the fuel for the current breakthroughs in artificial intelligence. Those breakthroughs are driven by machine learning, and machine learning is driven by data: very often our personal data. Machines do not need to exceed our capabilities in intelligence to have a highly significant social effect. They outperform us so greatly in their ability to process large volumes of data that they are able to second-guess us without expressing any form of higher intelligence. This is not the future of AI; this is here today.
Today's deep neural networks do not perform well because someone did something new and clever. Those methods did not work with the amount of data we had available in the 1990s; they work with the quantity of data we have now. They require far more data than any human uses to perform similar tasks. So already, the nature of the intelligence around us is data dominated. Any future advances will capitalise further on this phenomenon.
The data we have comes about because of rapid interconnectivity and plentiful storage (this is connected to the low embodiment factor of the computer). It is the consequence of the successes of the past, and it will feed the successes of the future. Because current AI breakthroughs are based on the accumulation of personal data, there is an opportunity to control their development by reforming our rules on data.
Unfortunately, this most obvious route to our AI futures is not addressed at all in the book.
Debates about the future of AI and machine learning are very important for society. People need to be well informed so that they continue to retain their individual agency when making decisions about their lives.
I welcome the entry of philosophers into this debate, but I don't think Superintelligence contributes as positively as it could have done to the challenges we face. In its current form many of its arguments are distractingly irrelevant.
I am not an apologist for machine learning, nor a promoter of an unthinking march to algorithmic dominance. I have my own fears about how these methods will affect our society, and those fears are immediate. Bostrom's book has the feel of an argument for doomsday prepping. But a challenge for all doomsday preppers is the quandary of exactly which doomsday they are preparing for. Problematically, if we become distracted by those images of Armageddon, we are in danger of ignoring the existing challenges that urgently need to be addressed.
This is post 6 in a series. Previous post here.