Every step forward in artificial intelligence (AI) challenges assumptions about what machines can do. Myriad opportunities for economic benefit have created a stable flow of investment into AI research and development, but with the opportunities come risks to decision-making, security and governance. Increasingly intelligent systems supplanting both blue- and white-collar employees are exposing the fault lines in our economic and social systems and requiring policy-makers to look for measures that will build resilience to the impact of automation.
Leading entrepreneurs and scientists are also concerned about how to engineer intelligent systems as these systems begin implicitly taking on social obligations and responsibilities, and several of them penned an Open Letter on Research Priorities for Robust and Beneficial Artificial Intelligence in late 2015.1 Whether or not we are comfortable with AI may already be moot: more pertinent questions might be whether we can and ought to build trust in systems that can make decisions beyond human oversight that may have irreversible consequences.
By providing new information and improving decision-making through data-driven strategies, AI could potentially help to solve some of the complex global challenges of the 21st century, from climate change and resource utilization to the impact of population growth and healthcare issues. Start-ups specializing in AI applications received US$2.4 billion in venture capital funding globally in 2015 and more than US$1.5 billion in the first half of 2016.2 Government programmes and existing technology companies add further billions (Figure 3.2.1). Leading players are not just hiring from universities; they are hiring the universities: Amazon, Google and Microsoft have moved to funding professorships and directly acquiring university researchers in the search for competitive advantage.3
Machine learning techniques are now revealing valuable patterns in large data sets and adding value to enterprises by tackling problems at a scale beyond human capability. For example, Stanford's computational pathologist (C-Path) has highlighted unnoticed indicators for breast cancer by analysing thousands of cellular features on hundreds of tumour images,4 while DeepMind increased the power usage efficiency of Alphabet Inc.'s data centres by 15%.5 AI applications can reduce costs and improve diagnostics with staggering speed and surprising creativity.
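The pattern-finding workflow behind systems like C-Path can be pictured with a minimal, hypothetical sketch: fit a standard classifier to a wide table of engineered image features and inspect which features carry the signal. The data, feature counts and model choice below are illustrative assumptions, not details of C-Path itself.

```python
# Minimal sketch: a classifier over thousands of engineered features,
# in the spirit of C-Path's feature-scoring approach (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images, n_features = 400, 2000          # hundreds of images, thousands of features
X = rng.normal(size=(n_images, n_features))
# Hypothetical outcome driven by a handful of informative features
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n_images) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Feature importances can surface previously unnoticed indicators,
# which is the kind of result reported for C-Path.
model.fit(X, y)
print("top feature indices:", np.argsort(model.feature_importances_)[-5:][::-1])
```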
The generic term "AI" covers a wide range of capabilities and potential capabilities. Some serious thinkers fear that AI could one day pose an existential threat: a superintelligence might pursue goals that prove not to be aligned with the continued existence of humankind. Such fears relate to "strong" AI or "artificial general intelligence" (AGI), which would be the equivalent of human-level awareness but does not yet exist.6 Current AI applications are forms of "weak" or "narrow" AI, or "artificial specialized intelligence" (ASI); they are directed at solving specific problems or taking actions within a limited set of parameters, some of which may be unknown and must be discovered and learned.
Tasks such as trading stocks, writing sports summaries, flying military planes and keeping a car within its lane on the highway are now all within the domain of ASI. As ASI applications expand, so do the risks of these applications operating in unforeseeable ways or outside the control of humans.7 The 2010 and 2015 stock market flash crashes illustrate how ASI applications can have unanticipated real-world impacts, while AlphaGo shows how ASI can surprise human experts with novel but effective tactics (Box 3.2.1). In combination with robotics, AI applications are already affecting employment and shaping risks related to social inequality.8
AI has great potential to augment human decision-making by countering cognitive biases and making rapid sense of extremely large data sets: at least one venture capital firm has already appointed an AI application to help determine its financial decisions.9 Gradually removing human oversight can increase efficiency and is necessary for some applications, such as automated vehicles. However, there are dangers in coming to depend entirely on the decisions of AI systems when we do not fully understand how the systems are making those decisions.10
Box 3.2.1, by Jean-Marc Rickli, Geneva Centre for Security Policy
One sector that saw the huge disruptive potential of AI from an early stage is the military. The weaponization of AI will represent a paradigm shift in the way wars are fought, with profound consequences for international security and stability. Serious investment in autonomous weapons systems (AWS) began a few years ago; in July 2016 the Pentagon's Defense Science Board published its first study on autonomy, but there is no consensus yet on how to regulate the development of these weapons.
The international community started to debate the emerging technology of lethal autonomous weapons systems (LAWS) in the framework of the United Nations Convention on Certain Conventional Weapons (CCW) in 2014. Yet, so far, states have not agreed on how to proceed. Those calling for a ban on AWS fear that human beings will be removed from the loop, leaving decisions on the use of lethal force to machines, with ramifications we do not yet understand.
There are lessons here from non-military applications of AI. Consider the example of AlphaGo, the AI Go-player created by Google's DeepMind division, which in March 2016 beat the world's second-best human player. Some of AlphaGo's moves puzzled observers, because they did not fit usual human patterns of play. DeepMind CEO Demis Hassabis explained the reason for this difference as follows: unlike humans, the AlphaGo program aims to maximize the probability of winning rather than to optimize margins. If this binary logic, in which the only thing that matters is winning while the margin of victory is irrelevant, were built into an autonomous weapons system, it would lead to the violation of the principle of proportionality, because the algorithm would see no difference between victories that required it to kill one adversary or 1,000.
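A toy calculation makes Hassabis's point concrete. Suppose an agent must choose between a move that wins narrowly but almost surely and one that wins big but less often; a pure win-probability objective and a margin-weighted objective pick different moves. The numbers below are invented purely for illustration.

```python
# Invented example: probability-of-winning vs margin-weighted objectives.
candidate_moves = {
    # move: (probability of winning, expected margin if it wins)
    "safe":       (0.90, 1.5),    # near-certain, narrow victory
    "aggressive": (0.60, 20.0),   # riskier, but a crushing win
}

by_win_prob = max(candidate_moves, key=lambda m: candidate_moves[m][0])
by_margin = max(candidate_moves,
                key=lambda m: candidate_moves[m][0] * candidate_moves[m][1])

print(by_win_prob)  # "safe": the binary, AlphaGo-style objective
print(by_margin)    # "aggressive": a margin-sensitive objective
```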
Autonomous weapons systems will also have an impact on strategic stability. Since 1945, the global strategic balance has prioritized defensive systems, a priority that has been conducive to stability because it has deterred attacks. However, the strategy of choice for AWS will be based on swarming, in which an adversary's defence system is overwhelmed with a concentrated barrage of coordinated simultaneous attacks. This risks upsetting the global equilibrium by neutralizing the defence systems on which it is founded. This would lead to a very unstable international configuration, encouraging escalation, arms races and the replacement of deterrence by pre-emption.
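The saturation logic can be sketched with a simple, invented model: a defence that can engage only a fixed number of targets per engagement window absorbs a sequential attack but leaks when the same number of attackers arrive at once. All parameters below are hypothetical.

```python
# Hypothetical saturation model: simultaneity, not raw numbers, is what
# overwhelms a defence with finite per-window intercept capacity.
def leakers(n_attackers: int, simultaneous: bool,
            intercepts_per_window: int = 10, windows: int = 5) -> int:
    """Attackers that get through under a crude capacity model."""
    if simultaneous:
        # Swarm arrives in one window: only one window's capacity applies.
        return max(0, n_attackers - intercepts_per_window)
    # Attack spread over time: every engagement window can be used.
    return max(0, n_attackers - intercepts_per_window * windows)

print(leakers(40, simultaneous=False))  # 0: sequential attack fully absorbed
print(leakers(40, simultaneous=True))   # 30: simultaneous swarm leaks through
```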
We may already have passed the tipping point for prohibiting the development of these weapons. An arms race in autonomous weapons systems is very likely in the near future. The international community should tackle this issue with the utmost urgency and seriousness because, once the first fully autonomous weapons are deployed, it will be too late to go back.
In any complex and chaotic system, including AI systems, potential dangers include mismanagement, design vulnerabilities, accidents and unforeseen occurrences.11 These pose serious challenges to ensuring the security and safety of individuals, governments and enterprises. It may be tolerable for a bug to cause an AI mobile phone application to freeze or misunderstand a request, for example, but when an AI weapons system or autonomous navigation system encounters a mistake in a line of code, the results could be lethal.
Machine-learning algorithms can also develop their own biases, depending on the data they analyse. For example, an experimental Twitter account run by an AI application ended up being taken down for making socially unacceptable remarks;12 search engine algorithms have also come under fire for undesirable race-related results.13 Decision-making that is either fully or partially dependent on AI systems will need to consider management protocols to avoid or remedy such outcomes.
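How such biases arise can be sketched in a few lines: a model trained on historical labels that are skewed against one group will reproduce the skew even when the code itself is neutral. The data and group variable below are synthetic assumptions.

```python
# Synthetic illustration: bias enters through training data, not code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, size=n)      # two demographic groups, 0 and 1
skill = rng.normal(size=n)              # the legitimate signal
# Historical labels skewed against group 1 at equal skill
label = (skill - 0.8 * group + 0.3 * rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), label)
same_skill = np.zeros(100)              # identical skill for both groups
p0 = model.predict_proba(np.column_stack([same_skill, np.zeros(100)]))[:, 1].mean()
p1 = model.predict_proba(np.column_stack([same_skill, np.ones(100)]))[:, 1].mean()
print(f"predicted approval at equal skill, group 0: {p0:.2f}, group 1: {p1:.2f}")
```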
AI systems in the Cloud are of particular concern because of issues of control and governance. Some experts propose that robust AI systems should run in a "sandbox" (an experimental space disconnected from external systems), but some cognitive services already depend on their connection to the internet. The AI legal assistant ROSS, for example, must have access to electronically available databases. IBM's Watson accesses electronic journals, delivers its services, and even teaches a university course via the internet.14 The data extraction program TextRunner is successful precisely because it is left to explore the web and draw its own conclusions unsupervised.15
On the other hand, AI can help solve cybersecurity challenges. Currently, AI applications are used to spot cyberattacks and potential fraud in internet transactions. Whether AI applications are better at learning to attack or to defend will determine whether online systems become more secure or more prone to successful cyberattacks.16 AI systems are already analysing vast amounts of data from phone applications and wearables; as sensors find their way into our appliances and clothing, maintaining security over our data and our accounts will become an even more crucial priority. In the physical world, AI systems are also being used in surveillance and monitoring, analysing video and sound to spot crime, help with anti-terrorism and report unusual activity.17 How much they will come to reduce overall privacy is a real concern.
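The fraud-spotting pattern mentioned above is commonly implemented as unsupervised anomaly detection: fit a detector on transaction features and flag outliers for review. The features and thresholds in this sketch are illustrative assumptions, not a production design.

```python
# Illustrative anomaly detection over transaction features (amount, hour).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal_txns = rng.normal(loc=[50.0, 13.0], scale=[20.0, 3.0], size=(1000, 2))
odd_txns = np.array([[5000.0, 3.0], [4200.0, 4.0]])   # big spends at 3-4 a.m.
transactions = np.vstack([normal_txns, odd_txns])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)                 # -1 marks anomalies
print("flagged transaction indices:", np.where(flags == -1)[0])
```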
So far, AI development has occurred in the absence of almost any regulatory environment.18 As AI systems inhabit more technologies in daily life, calls for regulatory guidelines will increase. But can AI systems be sufficiently governed? Such governance would require multiple layers that include ethical standards, normative expectations of AI applications, implementation scenarios, and assessments of responsibility and accountability for actions taken by or on behalf of an autonomous AI system.
AI research and development presents issues that complicate standard approaches to governance: it can take place outside traditional institutional frameworks, involving both people and machines in various locations. Developments in AI may not be well understood by policy-makers who lack specialized knowledge of the field, and they may involve technologies that are not an issue on their own but that collectively present emergent properties requiring attention.19 It would be difficult to regulate such things before they happen, and any unforeseeable consequences or control issues may be beyond governance once they occur (Box 3.2.2).
One option could be to regulate the technologies through which the systems work. For example, in response to the development of automated transportation that will require AI systems, the U.S. Department of Transportation has issued a 116-page policy guide.20 Although the policy guide does not address AI applications directly, it does put in place guidance frameworks for the developers of automated vehicles in terms of safety, control and testing.
Scholars, philosophers, futurists and tech enthusiasts vary in their predictions for the advent of artificial general intelligence (AGI), with timelines ranging from the 2030s to never. However, given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent or even morally obligatory to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.
The creation of AGI may depend on converging technologies and hybrid platforms. Much of human intelligence is developed by the use of a body and the occupation of physical space, and robotics provides such embodiment for experimental and exploratory AI applications. Proof-of-concept for muscle and brain-computer interfaces has already been established: Massachusetts Institute of Technology (MIT) scientists have shown that memories can be encoded in silicon,21 and Japanese researchers have used electroencephalogram (EEG) patterns to predict the next syllable someone will say with up to 90% accuracy, which may lead to the ability to control machines simply by thinking.22
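The EEG result can be pictured with a hedged sketch of a standard decoding pipeline: extract per-channel features from short EEG windows and train a classifier to predict the upcoming syllable. Everything below (channel counts, the synthetic signal, the classifier choice) is an assumption standing in for the real study's pipeline.

```python
# Hedged sketch of EEG decoding: synthetic per-channel features -> syllable.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_trials, n_channels = 600, 16
syllable = rng.integers(0, 4, size=n_trials)   # four candidate syllables
X = rng.normal(size=(n_trials, n_channels))    # stand-in band-power features
X[np.arange(n_trials), syllable] += 1.2        # syllable-specific channel activity

X_tr, X_te, y_tr, y_te = train_test_split(X, syllable, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```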
Superintelligence could potentially also be achieved by augmenting human intelligence through smart systems, biotech and robotics, rather than by being embodied in a computational or robotic form.23 Potential barriers to integrating humans with intelligence-augmenting technology include people's cognitive load, physical acceptance and concepts of personal identity.24 Should these challenges be overcome, keeping watch over the state of converging technologies will become an ever more important task as AI capabilities grow and fuse with other technologies and organisms.
Advances in computing technologies such as quantum computing, parallel systems and neurosynaptic computing may create new opportunities for AI applications or unleash new, unforeseen behaviours in computing systems.25 New computing technologies are already having an impact: for instance, IBM's TrueNorth chip, with a design inspired by the human brain and built for exascale computing, already has contracts from Lawrence Livermore National Laboratory in California to work on nuclear weapons security.26 While such systems add great benefit to scenario modelling today, the possibility of a superintelligence could turn this capability into a risk.
Box 3.2.2, by Stuart Russell, University of California, Berkeley
Few in the field believe that there are intrinsic limits to machine intelligence, and even fewer argue for self-imposed limits. Thus it is prudent to anticipate the possibility that machines will exceed human capabilities, as Alan Turing posited in 1951: "If a machine can think, it might think more intelligently than we do. [T]his new danger is certainly something which can give us anxiety."
So far, the most general approach to creating generally intelligent machines is to provide them with our desired objectives and with algorithms for finding ways to achieve those objectives. Unfortunately, we may not specify our objectives in such a complete and well-calibrated fashion that a machine cannot find an undesirable way to achieve them. This is known as the value alignment problem, or the "King Midas problem". Turing suggested turning off the power at strategic moments as a possible response to discovering that a machine is misaligned with our true objectives, but a superintelligent machine is likely to have taken steps to prevent interruptions to its power supply.
How can we define problems in such a way that any solution the machine finds will be provably beneficial? One idea is to give a machine the objective of maximizing the true human objective, but without initially specifying that true objective: the machine has to gradually resolve its uncertainty by observing human actions, which reveal information about the true objective. This uncertainty should avoid the single-minded and potentially catastrophic pursuit of a partial or erroneous objective. It might even persuade a machine to leave open the possibility of allowing itself to be switched off.
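A toy expected-value calculation, in the spirit of this proposal (an assumption-laden simplification, not the authors' model), shows why uncertainty about the true objective can make deferring to a human switch-off worthwhile.

```python
# Toy model: a machine unsure of its action's true utility U compares
# acting immediately with deferring to an idealized human overseer who
# permits the action when U > 0 and otherwise switches the machine off.
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(loc=0.2, scale=1.0, size=100_000)  # belief over true utility

ev_act = U.mean()                        # act now, bypassing oversight
ev_defer = np.maximum(U, 0.0).mean()     # defer; switch-off yields utility 0

print(f"act now: {ev_act:.3f}, defer: {ev_defer:.3f}")
# Under enough uncertainty, deferring dominates, so the machine has an
# incentive to keep the off-switch available rather than disable it.
```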
There are complications: humans are irrational, inconsistent, weak-willed, computationally limited and heterogeneous, all of which conspire to make learning about human values from human behaviour a difficult (and perhaps not totally desirable) enterprise. However, these ideas provide a glimmer of hope that an engineering discipline can be developed around provably beneficial systems, allowing a safe way forward for AI. Near-term developments such as intelligent personal assistants and domestic robots will provide opportunities to develop incentives for AI systems to learn value alignment: assistants that book employees into US$20,000-a-night suites and robots that cook the cat for the family dinner are unlikely to prove popular.
Both existing ASI systems and the plausibility of AGI demand mature consideration. Major firms such as Microsoft, Google, IBM, Facebook and Amazon have formed the Partnership on Artificial Intelligence to Benefit People and Society to focus on ethical issues and to help the public better understand AI.27 AI will become ever more integrated into daily life as businesses employ it in applications to provide interactive digital interfaces and services, increase efficiencies and lower costs.28 Superintelligent systems remain, for now, only a theoretical threat, but artificial intelligence is here to stay and it makes sense to see whether it can help us to create a better future. To ensure that AI stays within the boundaries that we set for it, we must continue to grapple with building trust in systems that will transform our social, political and business environments, make decisions for us, and become an indispensable faculty for interpreting the world around us.
Chapter 3.2 was contributed by Nicholas Davis, World Economic Forum, and Thomas Philbeck, World Economic Forum.
Armstrong, S. 2014. Smarter than Us: The Rise of Machine Intelligence. Berkeley, CA: Machine Intelligence Research Institute.
Bloomberg. 2016. Boston Marathon Security: Can A.I. Predict Crimes? Bloomberg News, Video, 21 April 2016. http://www.bloomberg.com/news/videos/b/d260fb95-751b-43d5-ab8d-26ca87fa8b83
Bostrom, N. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
CB Insights. 2016. Artificial intelligence explodes: New deal activity record for AI startups. Blog, 20 June 2016. https://www.cbinsights.com/blog/artificial-intelligence-funding-trends/
Chiel, E. 2016. Black teenagers vs. white teenagers: Why Google's algorithm displays racist results. Fusion, 10 June 2016. http://fusion.net/story/312527/google-image-search-algorithm-three-black-teenagers-vs-three-white-teenagers/
Clark, J. 2016. Google cuts its giant electricity bill with DeepMind-powered AI. Bloomberg Technology, 19 July 2016. http://www.bloomberg.com/news/articles/2016-07-19/google-cuts-its-giant-electricity-bill-with-deepmind-powered-ai
Cohen, J. 2013. Memory implants: A maverick neuroscientist believes he has deciphered the code by which the brain forms long-term memories. MIT Technology Review. https://www.technologyreview.com/s/513681/memory-implants/
Frey, C. B. and M. A. Osborne. 2015. Technology at work: The future of innovation and employment. Citi GPS: Global Perspectives & Solutions, February 2015. http://www.oxfordmartin.ox.ac.uk/downloads/reports/Citi_GPS_Technology_Work.pdf
Hern, A. 2016. Partnership on AI formed by Google, Facebook, Amazon, IBM and Microsoft. The Guardian Online, 28 September 2016. https://www.theguardian.com/technology/2016/sep/28/google-facebook-amazon-ibm-microsoft-partnership-on-ai-tech-firms
Hunt, E. 2016. Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter. The Guardian, 24 March 2016. https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter
Kelly, A. 2016. Will Artificial Intelligence read your mind? Scientific research analyzes brainwaves to predict words before you speak. iDigital Times, 9 January 2016. http://www.idigitaltimes.com/will-artificial-intelligence-read-your-mind-scientific-research-analyzes-brainwaves-502730
Kime, B. 2016. 3 Chatbots to deploy in your business. VentureBeat, 1 October 2016. http://venturebeat.com/2016/10/01/3-chatbots-to-deploy-in-your-business/
Lawrence Livermore National Laboratory. 2016. Lawrence Livermore and IBM collaborate to build new brain-inspired supercomputer, Press release, 29 March 2016. https://www.llnl.gov/news/lawrence-livermore-and-ibm-collaborate-build-new-brain-inspired-supercomputer
Maderer, J. 2016. Artificial Intelligence course creates AI teaching assistant. Georgia Tech News Center, 9 May 2016. http://www.news.gatech.edu/2016/05/09/artificial-intelligence-course-creates-ai-teaching-assistant
Martin, M. 2012. C-Path: Updating the art of pathology. Journal of the National Cancer Institute 104 (16): 1202–04. http://jnci.oxfordjournals.org/content/104/16/1202.full
Mizroch, A. 2015. Artificial-intelligence experts are in high demand. Wall Street Journal Online, 1 May 2015. http://www.wsj.com/articles/artificial-intelligence-experts-are-in-high-demand-1430472782
Russell, S., D. Dewey, and M. Tegmark. 2015. Research priorities for a robust and beneficial artificial intelligence. AI Magazine Winter 2015: 105–14.
Scherer, M. U. 2016. Regulating Artificial Intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology 29 (2): 354–98.
Sherpany. 2016. Artificial Intelligence: Bringing machines into the boardroom, 21 April 2016. https://www.sherpany.com/en/blog/2016/04/21/artificial-intelligence-bringing-machines-boardroom/
Talbot, D. 2009. Extracting meaning from millions of pages. MIT Technology Review, 10 June 2009. https://www.technologyreview.com/s/413767/extracting-meaning-from-millions-of-pages/
Turing, A. M. 1951. Can digital machines think? Lecture broadcast on BBC Third Programme; typescript at turingarchive.org
U.S. Department of Transportation. 2016. Federal Automated Vehicles Policy September 2016. Washington, DC: U.S. Department of Transportation. https://www.transportation.gov/AV/federal-automated-vehicles-policy-september-2016
Wallach, W. 2015. A Dangerous Master. New York: Basic Books.
Yirka, B. 2016. Researchers create organic nanowire synaptic transistors that emulate the working principles of biological synapses. TechXplore, 20 June 2016. https://techxplore.com/news/2016-06-nanowire-synaptic-transistors-emulate-principles.html