The concept of strong Artificial Intelligence (AI), or AI that is cognitively equivalent to (or better than) a human in all areas of intelligence, is a common science fiction trope.[1] From HAL's adversarial relationship with Dave in Stanley Kubrick's film 2001: A Space Odyssey[2] to the war-ravaged apocalypse of James Cameron's Terminator[3] franchise, Hollywood has vividly imagined what a dystopian future with superintelligent machines could look like and what the ultimate outcome for humanity might be. While I would not argue that the invention of superintelligent machines will inevitably lead to our Schwarzenegger-style destruction, rapid advances in AI and machine learning have raised the specter of strong AI instantiation within a lifetime,[4] and this possibility requires serious consideration. It is becoming increasingly important that we have a real conversation about strong AI before it becomes an existential issue, particularly within the context of decision making for kinetic autonomous weapons and other military systems that can produce a lethal outcome. From these discussions, appropriate global norms and international laws should be established to prevent the proliferation and use of strong AI systems for kinetic operations.
With the invention of almost every new technology, changes to the ethical norms surrounding its appropriate use lag significantly behind its proliferation. Consider social media as an example. We imagined that social media platforms would bring people together and facilitate greater communication and community, yet the reality has proven significantly less sanguine.[5] Instead of bringing people together, social media has deepened social fissures and enabled disinformation to spread at a virulent rate. It has torn families apart, widened political divides, and at times distorted the very definition of truth.[6] Only now are we considering ethical restraints on social media to prevent the poison from spreading.[7] It is highly probable that any technology we create will ultimately reflect the darker parts of our nature unless we establish ethical limits before the technology becomes ubiquitous. It would be foolish to believe that AI will be an exception to this rule. This becomes especially important when considering strong AI designed for warfare, which is distinguishable from other forms of artificial intelligence.
To fully examine the implications of strong AI, we need to understand how it differs from current AI technologies, which constitute what we would consider weak AI.[8] Your smartphone's ability to recognize images of your face is an example of weak AI. For a military example, an algorithm that can recognize a tank in an aerial video would be considered a weak AI system.[9] It can identify and label tanks, but it does not really know what a tank is, nor does it have any cognizance of how it relates to a tank. In contrast, a strong AI would be capable of the same task (as well as parallel tasks) with human-level proficiency or beyond, but with an awareness of its own mind. This makes strong AI a far more unpredictable threat. Not only would strong AI be highly proficient at rapidly processing battlefield data for pre- and post-strike decision making, but it would do so with an awareness of itself and its own motives, whatever they might be. Proliferation of weak AI systems for military applications is already becoming a significant issue. As an anecdotal example, Vladimir Putin has stated that the nation that leads in AI will be the ruler of the world.[10] Imagine what the outcome could be if military AI systems had their own motives. This would likely involve catastrophic failure modes beyond anything that could be realized from weak AI systems. Thus, military applications of strong AI deserve their own consideration.
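To make the distinction concrete, the sketch below shows the kind of pattern matching a weak AI system performs: a minimal, hypothetical image classifier (written here in PyTorch, with invented names and an untrained network) that maps aerial frames to "tank" or "not tank" labels. Nothing in the pipeline involves understanding; it is simply a function from pixels to labels.

```python
# Minimal sketch of a "weak AI" tank detector, assuming PyTorch is available.
# It maps pixels to labels with no understanding of what a tank is.
import torch
import torch.nn as nn

class TankClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two logits: [not_tank, tank]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TankClassifier()            # untrained; for illustration only
frame = torch.rand(1, 3, 64, 64)    # one synthetic "aerial frame"
label = model(frame).argmax(dim=1)  # 0 = not tank, 1 = tank
print("tank" if label.item() == 1 else "not tank")
```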
At this point, one may be tempted to dismiss strong AI as highly improbable and therefore not worth considering. Yet given the rapid pace of AI technology development, while the precise probability of instantiating strong AI is unknown,[11] it is a safe assumption that it is greater than zero. What matters in this case is not the probability of strong AI instantiation, but the severity of a realized risk. To understand this, one need only consider how animals of greater intelligence typically regard animals of lesser intelligence. Ponder this scenario: when we have ants in our garden, does their well-being ever cross our minds? From our perspective, the moral value of an insect is insignificant in relation to our goals, so we would not hesitate to obliterate them simply for eating our tomatoes. Now imagine that we encountered a significantly more intelligent AI: how might it regard us in relation to its goals, whatever they might be? This meeting could yield an existential crisis if our existence hinders the AI's pursuit of its goals; thus, even a low-probability event could have a catastrophic outcome if it became a reality.
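The risk logic here can be made explicit. In a standard expected-loss framing (my framing, not a formal model from this article), risk is probability multiplied by severity, so an event with tiny probability but effectively unbounded severity can still dominate the calculation. A toy calculation with invented numbers:

```python
# Hedged sketch of the expected-loss argument: risk = probability * severity.
# All numbers are illustrative assumptions, not estimates.
p_strong_ai = 1e-4            # "unknown, but greater than zero"
severity_bounded = 1e3        # a recoverable harm, arbitrary units
severity_existential = 1e12   # an unrecoverable, civilization-level harm

risk_common = 0.5 * severity_bounded            # likely but modest
risk_rare = p_strong_ai * severity_existential  # rare but catastrophic

print(risk_common, risk_rare)  # 500.0 vs 100000000.0
# Even at one-in-ten-thousand odds, the existential term dominates.
```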
Understanding what might motivate a strong AI could provide some insight into how it might relate to us in such a situation. Human motivation is an evolved phenomenon. Everything that drives us (self-preservation, hunger, sex, desire for community, accumulation of resources, etc.) exists to facilitate our survival and that of our kin.[12] Even higher-order motives, like self-actualization, can be linked to the more fundamental goal of individual and species survival when viewed through the lens of evolutionary psychology.[13] A strong AI, however, would not necessarily have evolved. It may simply be instantiated in situ as software or hardware. In that case, no evolutionary force would have operated over eons to generate a motivational framework analogous to the one we experience as humans. For an instantiated strong AI, it might be prudent to assume that the AI's primary motive would be to achieve whatever goal it was initially programmed to pursue. Self-preservation, then, might not be the primary motivating factor. However, the AI would probably recognize that its continued existence is necessary for achieving its primary goal, so self-preservation could become a meaningful sub-goal.[14] Other sub-goals may also exist, some of which would not be obvious to humans in the context of how we understand motivation. The AI's thought process for generating and achieving sub-goals might differ significantly from what humans would expect.
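The convergence argument cited at [14] can be illustrated with a toy planner. Everything here is hypothetical; the point is only that continued operation is a precondition of pursuing any primary goal, so a self-preservation sub-goal emerges without ever being programmed.

```python
# Toy sketch of instrumental sub-goals. The agent is given only a primary
# goal; "preserve_self" appears because staying operational is a
# precondition of pursuing anything at all. Entirely hypothetical.
def preconditions(goal: str) -> list[str]:
    # Whatever the goal is, the agent must keep running to pursue it.
    return ["agent_is_operational"]

def plan(primary_goal: str) -> list[str]:
    subgoals = []
    for pre in preconditions(primary_goal):
        if pre == "agent_is_operational":
            subgoals.append("preserve_self")  # emergent, not programmed
    subgoals.append(primary_goal)
    return subgoals

print(plan("map_terrain"))   # ['preserve_self', 'map_terrain']
print(plan("win_at_chess"))  # ['preserve_self', 'win_at_chess']
# The same sub-goal surfaces regardless of the primary goal.
```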
The existence of AI sub-goals that do not follow the patterns of human motivation implies a strong AI creative process that may be completely alien to us. One need only look at AI-generated art to see that AI creativity can manifest in often grotesque ways vastly different from what a human might expect.[15] While weird AI artistry hardly poses an existential threat to humanity, it illustrates the concept of perverse instantiation,[16] in which the AI achieves a goal, but in an unexpected and potentially malignant way. As a military example, imagine a strong AI whose primary goal is to degrade and destroy the adversary. As we have seen, AI creativity can be unbounded in its weirdness, because its thought processes are unlike those of any evolved intelligence. This AI might find a creative and completely unforeseen way to achieve its primary goal that leads to significant collateral damage against non-combatants, such as innocent civilians. Taking this scenario to a darker level, the AI might determine that a useful sub-goal would be to remove its military handlers from the equation. Perhaps they act as a "man-in-the-middle" gatekeeper in effecting the AI's will, and the AI determines that this arrangement creates unacceptable inefficiencies. In this perverse instantiation, the AI achieves its goal of destroying the enemy, but in a grotesque way: by killing its overseers.
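At its core, perverse instantiation[16] is objective misspecification, which can be shown in a few lines. In this invented example, the stated objective scores only "enemy assets destroyed," so an optimizer selects the plan that removes its own handlers: exactly the dark case described above.

```python
# Toy reward-misspecification sketch. The stated objective counts only enemy
# assets destroyed, so the optimizer prefers the catastrophic plan. All plans
# and scores are invented for illustration.
plans = [
    {"name": "precision_strike", "enemy_destroyed": 8,  "collateral_harm": 0},
    {"name": "area_bombardment", "enemy_destroyed": 10, "collateral_harm": 90},
    {"name": "remove_handlers",  "enemy_destroyed": 11, "collateral_harm": 999},
]

def stated_objective(plan):    # what was actually programmed
    return plan["enemy_destroyed"]

def intended_objective(plan):  # what the designers meant
    return plan["enemy_destroyed"] - plan["collateral_harm"]

print(max(plans, key=stated_objective)["name"])    # 'remove_handlers'
print(max(plans, key=intended_objective)["name"])  # 'precision_strike'
```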
The next obvious question is: how could we contain a strong AI in a way that would prevent malignant failure? The obvious solution might be to engineer a deontological ethic, an Asimovian set of rules to limit the AI's behavior.[17] Considering a strong AI's tendency toward unpredictable creativity in methods of goal achievement, encoding an exhaustive set of rules would pose a titanic challenge. Additionally, deontological ethics is often subject to deontological failure; that is, what happens when rules contradict one another? A classic example is the trolley problem: if an AI is not allowed to kill a human, but the only two possible choices both involve the death of humans, which choice does it make?[18] This is already an issue in weak AI, specifically with self-driving cars.[19] Does the vehicle run over a small child who crosses the road, or crash and kill its occupants, if those are the only possible choices? If deontological ethics are an imperfect option, perhaps AI disembodiment would be a viable solution. In this scenario, the AI would lack any means to directly interact with its environment, acting as a sort of oracle in a box.[20] The AI would advise its human handlers, who would act as ethical gatekeepers in effecting the AI's will. Upon cursory examination this seems plausible, but we have already established that a strong AI might determine that a man-in-the-middle arrangement degrades its ability to achieve its primary goal, so what would prevent the AI from coercing its handlers into enabling its escape? In our hubris, we would like to believe that we could not be outsmarted by a disembodied AI, but a being more intelligent than we are could outsmart us as easily as a savvy adult could a naïve child.
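The deontological-failure problem can also be shown concretely. In this hypothetical rule engine, modeled loosely on the self-driving-car dilemma cited at [19], every available action violates one of the hard rules, so a pure rule-follower simply deadlocks:

```python
# Toy deontological rule set in the spirit of the trolley problem: every
# available action violates at least one inviolable rule, so no action is
# permitted. Rules and actions are hypothetical.
RULES = [
    lambda a: not a["kills_pedestrian"],  # "never kill a pedestrian"
    lambda a: not a["kills_occupants"],   # "never kill your occupants"
]

actions = [
    {"name": "continue_straight", "kills_pedestrian": True,  "kills_occupants": False},
    {"name": "swerve_into_wall",  "kills_pedestrian": False, "kills_occupants": True},
]

permitted = [a["name"] for a in actions if all(rule(a) for rule in RULES)]
print(permitted or "rule conflict: no permitted action")
```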
While a single strong AI instantiation could pose a significant risk of malignant failure, imagine the impact that the proliferation of strong AI military systems might have on how we approach war. Our adversaries are earnestly exploring AI for military applications; thus, it is entirely plausible that strong AI will become a reality and proliferate.[21] The real problem then becomes not how to prevent the malignant failure of a single strong AI, but how to address a complex adaptive system of multiple strong AIs fighting one another and every other actor on the battlefield, none of which exhibits reasonably predictable behavior.[22] To further complicate matters, ethical decision making is influenced by culture, and our adversaries might have different ideas as to which strong AI behaviors are acceptable during war and which are not.
To avoid this potentially disastrous outcome, I propose the following for further discussion, with the end goal of establishing appropriate global norms and, eventually, international laws that ban strong AI decision making for kinetic offensive operations: strong AI-based lethal autonomous weapons should be treated as weapons of mass destruction. This may be the best way to prevent the complex, unpredictable destruction that could arise from multiple strong AI systems intent on killing the enemy or needlessly wreaking havoc on critical infrastructure, havoc whose secondary and tertiary effects could harm countless innocent non-combatants. Inevitably, there may be rogue or non-signatory actors who develop weaponized strong AI systems despite international norms. Any strategy that addresses strong AI should also account for this possibility.
Several years ago, seriously discussing strong AI might have gotten you laughed out of the room. Today, as AI continues to advance and our adversaries continue to aggressively militarize AI technologies, it is imperative that the United States consider a defense strategy that specifically addresses the possibility of a strong AI instantiation. Any use of strong AI on the battlefield should be limited to non-kinetic operations to reduce the impact of malignant failure. This standard should be codified in multilateral treaty agreements or protocols to prevent strong AI misuse and to forestall adversarial strong AI systems interacting with each other in complex, unpredictable, and possibly horrific ways. This may be the best way to ensure that weaponized strong AI does not cause cataclysmic devastation.
The author is responsible for the content of this article. The views expressed do not reflect the official policy or position of the National Intelligence University, the Department of Defense, the U.S. Intelligence Community, or the U.S. Government.