The "nuclear football" follows the president on trips. It allows the president to authorize a nuclear launch.
If artificial intelligences controlled nuclear weapons, all of us could be dead.
That is no exaggeration. In 1983, Soviet Air Defense Forces Lieutenant Colonel Stanislav Petrov was monitoring nuclear early warning systems when the computer concluded, with the highest confidence, that the United States had launched a nuclear war. But Petrov was doubtful: The computer estimated only a handful of nuclear weapons were incoming, when such a surprise attack would more plausibly entail an overwhelming first strike. He also didn't trust the new launch detection system, and the radar system didn't have corroborative evidence. Petrov decided the message was a false positive and did nothing. The computer was wrong; Petrov was right. The false signals came from the early warning system mistaking the sun's reflection off the clouds for missiles. But if Petrov had been a machine, programmed to respond automatically when confidence was sufficiently high, that error would have started a nuclear war.
Militaries are increasingly incorporating autonomous functions into weapons systems, though, as far as is publicly known, they haven't yet turned the nuclear launch codes over to an AI system. Russia has developed a nuclear-armed, nuclear-powered torpedo that is autonomous in some not-publicly-known manner, and defense thinkers in the United States have proposed automating the launch decision for nuclear weapons.
There is no guarantee that some military won't put AI in charge of nuclear launches; international law doesn't specify that there should always be a Petrov guarding the button. That's something that should change, soon.
How autonomous nuclear weapons could go wrong.
The huge problem with autonomous nuclear weapons, and really all autonomous weapons, is error. Machine learning-based artificial intelligences (the current AI vogue) rely on large amounts of data to perform a task. Google's AlphaGo program beat the world's greatest human Go players, experts at the ancient Chinese game that's even more complex than chess, by playing millions of games against itself to learn the game. For a constrained game like Go, that worked well. But in the real world, data may be biased or incomplete in all sorts of ways. For example, one hiring algorithm concluded that being named Jared and playing high school lacrosse was the most reliable indicator of job performance, probably because it picked up on human biases in the data.
In a nuclear weapons context, a government may have little data about adversary military platforms; existing data may be structurally biased, for example by relying on satellite imagery; or data may not account for obvious, expected variations, such as imagery taken during foggy, rainy, or overcast weather.
The nature of nuclear conflict compounds the problem of error.
How would a nuclear weapons AI even be trained? Nuclear weapons have been used in war only twice, at Hiroshima and Nagasaki, and serious nuclear crises are (thankfully) infrequent. Perhaps inferences can be drawn from adversary nuclear doctrine, plans, acquisition patterns, and operational activity, but the lack of actual examples of nuclear conflict means judging the quality of those inferences is impossible. While a lack of examples hinders humans too, humans have the capacity for higher-order reasoning. Humans can create theories and identify generalities from limited information, or from information that is analogous but not equivalent. Machines cannot.
The deeper challenge is high false positive rates in predicting rare events. There have thankfully been only two nuclear attacks in history. An autonomous system designed to detect and retaliate against an incoming nuclear weapon, even if highly accurate, will frequently exhibit false positives. Around the world, in North Korea, Iran, and elsewhere, test missiles are fired into the sea and rockets are launched into the atmosphere. And there have been many false alarms of nuclear attacks, vastly more than actual attacks. An AI that's right almost all the time still has a lot of opportunity to get it wrong. Similarly, a test that accurately diagnoses cases of a rare disease 99 percent of the time may yield a positive diagnosis that means just a 5 percent likelihood of actually having the disease, depending on assumptions about the disease's prevalence and false positive rates. This is because, with rare diseases, the number of false positives can vastly outweigh the number of true positives. So, if an autonomous nuclear weapon concluded with 99 percent confidence that a nuclear war is about to begin, should it fire?
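A quick calculation illustrates the base-rate problem described above. This is a minimal sketch only; the specific numbers (a one-in-2,000 prevalence and a test that is 99 percent accurate in both directions) are illustrative assumptions, not figures from any real warning system or disease:

```python
# Minimal sketch of the base-rate problem: even a highly accurate detector
# produces mostly false alarms when the event it looks for is very rare.
# All numbers below are illustrative assumptions, not real-world figures.
prevalence = 1 / 2000       # assumed chance the rare event is actually occurring
sensitivity = 0.99          # P(alarm | real event)
specificity = 0.99          # P(no alarm | no event)

true_positive = sensitivity * prevalence
false_positive = (1 - specificity) * (1 - prevalence)

# Bayes' rule: probability the event is real, given that the alarm fired
posterior = true_positive / (true_positive + false_positive)
print(f"P(real event | alarm) = {posterior:.1%}")   # roughly 4.7 percent
```

Even with 99 percent accuracy, the sheer number of non-events means false alarms swamp true ones, leaving the alarm right only about one time in twenty.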
In the extremely unlikely event those problems can all be solved, autonomous nuclear weapons would still introduce new risks of error and opportunities for bad actors to manipulate systems. Current AI is not only brittle; it's easy to fool. A single-pixel change can be enough to convince an AI a stealth bomber is a dog. This creates two problems. If a country actually sought a nuclear war, it could fool the AI system first, rendering it useless. Or a well-resourced, apocalyptic terrorist organization like the Japanese cult Aum Shinrikyo might attempt to trick an adversary's system into starting a catalytic nuclear war. Both approaches can be carried out in quite subtle, difficult-to-detect ways: data poisoning may manipulate the training data that feeds the AI system, or unmanned systems or emitters could be used to trick an AI into believing a nuclear strike is incoming.
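To see how little it can take to fool a model, consider the toy sketch below. It is not any deployed system's classifier; it simply shows, for an assumed linear model with made-up random weights, how a tiny, uniform nudge to every input value (in the spirit of fast-gradient-sign attacks) is enough to flip the model's decision:

```python
import numpy as np

# Toy illustration (assumed linear "classifier" with random weights, not a real
# target-recognition system): a small, uniform nudge to every input flips the output.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)        # weights of a toy linear classifier
x = rng.normal(size=1000)        # a toy input the classifier scores

score = float(w @ x)             # positive score => class A, negative => class B
label = 1.0 if score > 0 else -1.0

# Smallest per-feature change that crosses the decision boundary, plus a 10% margin
epsilon = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - epsilon * label * np.sign(w)   # nudge each feature against the current label

print(f"per-feature change: {epsilon:.4f}")
print(f"original score: {score:+.2f}, adversarial score: {float(w @ x_adv):+.2f}")
# The per-feature change is tiny relative to typical input values (~1.0), yet the
# sign of the score, and therefore the classification, flips.
```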
The risk of error can confound well-laid nuclear strategies and plans. If a military had to start a nuclear war, targeting an enemy's own nuclear systems with gigantic force would be a good way to limit retaliation. However, if an AI launched a nuclear weapon in error, the decisive opening salvo may instead be a pittance: a single nuclear weapon aimed at a less-than-ideal target. Accidentally nuking a major city might provoke an overwhelming nuclear retaliation, because the adversary would still have all its missile silos, just not its city.
Some have nonetheless argued that autonomous weapons (not necessarily autonomous nuclear weapons) will eventually reduce the risk of error. Machines do not need to protect themselves and can be more conservative in making decisions to use force. They do not have emotions that cloud their judgement, and they do not exhibit confirmation bias, a type of bias in which people interpret data in a way that conforms to their desires or beliefs.
While these arguments have potential merit in conventional warfare, depending on how technology evolves, they do not hold in nuclear warfare. Because nuclear weapons serve as strategic deterrents, countries have strong incentives to protect the platforms that carry them; those weapons literally safeguard the state's existence. Instead of being risk avoidant, countries have an incentive to launch while under attack, because otherwise they may lose their nuclear weapons. Some emotion should also be a part of nuclear decision-making: The prospect of catastrophic nuclear war should be terrifying, and the decision made extremely cautiously.
Finally, while autonomous nuclear weapons may not exhibit confirmation bias, the lack of training data and real-world test environments means an autonomous nuclear weapon may exhibit numerous other biases, which may never be discovered until after a nuclear war has started.
The decision to unleash nuclear force is the single most significant decision a leader can make. It commits a state to an existential conflict with millions, if not billions, of lives in the balance. Such a consequential, deeply human decision should never be made by a computer.
Activists against autonomous weapons have been hesitant to focus on autonomous nuclear weapons. For example, the International Committee of the Red Cross makes no mention of autonomous nuclear weapons in its position statement on autonomous weapons. (In fairness, the International Committee for Robot Arms Control's 2009 statement references autonomous nuclear weapons, though it represents more of the intellectual wing of the so-called "stop killer robots" movement.) Perhaps activists see nuclear weapons as already broadly banned or do not wish to legitimize nuclear weapons generally, but the lack of attention is a mistake. Nuclear weapons already have broad, established norms against their use and proliferation, with numerous treaties supporting them. Banning autonomous nuclear weapons should be an easy win that helps establish norms against autonomous weapons. Plus, autonomous nuclear weapons represent perhaps the highest-risk manifestation of autonomous weapons (an artificial superintelligence is the only potentially higher risk). Which is worse: an autonomous gun turret accidentally killing a civilian, or an autonomous nuclear weapon igniting a nuclear war that leads to catastrophic destruction and possibly the extinction of all humanity? Hint: catastrophic destruction is vastly worse.
Where autonomous nuclear weapons stand.
Some autonomy in nuclear weapons is already here, but it's complicated, and it is unclear how worried we should be.
Russia's Poseidon is an "Intercontinental Nuclear-Powered Nuclear-Armed Autonomous Torpedo," according to US Navy documents, while the Congressional Research Service has also described it as an autonomous undersea vehicle. The weapon is intended to be a second-strike weapon used in the event of a nuclear conflict; that is, a weapon intended to ensure a state can always retaliate against a nuclear strike, even an unexpected, so-called "bolt from the blue." An unanswered question is: What can the Poseidon do autonomously? Perhaps the torpedo just has some autonomous maneuvering ability to better reach its target; basically, an underwater cruise missile. That's probably not a big deal, though there may be some risk of error in misdirecting the attack.
It is more worrisome if the torpedo is given permission to attack autonomously under specific conditions. For example, what if, in a crisis scenario where Russian leadership fears a possible nuclear attack, Poseidon torpedoes are launched under a loiter mode? It could be that if the Poseidon loses communications with its host submarine, it launches an attack. Most worrisome, though quite unlikely, would be a torpedo with the ability to decide to attack entirely on its own. This would require an independent means for the Poseidon to assess whether a nuclear attack had taken place while sitting far beneath the ocean. Of course, given how little is known about the Poseidon, this is all speculation. But that's part of the point: Understanding how another country's autonomous systems operate is really hard.
Countries are also interested in so-called dead hand systems, which are meant to provide a backup in case a state's nuclear command authority is disrupted or killed. A relatively simple system like Russia's Perimeter might delegate launch authority to a lower-level commander in the event of a crisis and specific conditions, like a loss of communication with command authorities. But as deterrence experts Adam Lowther and Curtis McGuffin argued in a 2019 article in War on the Rocks, the United States should consider an automated strategic response system based on artificial intelligence.
The authors reason that the decision-making time to launch nuclear weapons has become so constrained that an artificial intelligence-based dead hand should be considered, despite, as the authors acknowledge, the potential for numerous errors and problems the system would create. Lt. Gen. Jack Shanahan, former leader of the Department of Defense's Joint Artificial Intelligence Center, shot the proposal down immediately: "You will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense, but there is one area where I pause, and it has to do with nuclear command and control." But Shanahan retired in 2020, and there is no reason to believe the proposal will not come up again. Perhaps next time, no one will shoot it down.
What needs to happen.
As allowed under Article VIII of the Nuclear Non-Proliferation Treaty, a member state should propose an amendment to the treaty requiring all nuclear weapons states to always include humans within decision-making chains on the use of nuclear weapons. This could require diplomacy and might take a while. In the near term, countries should raise the issue when the member states next meet to review the treaty in August 2022 and establish a side event focused on autonomous nuclear weapons issues during the 2025 conference. Even if a consensus cannot be established at the 2022 conference, countries can begin the process of working through any barriers in support of a future amendment. Countries can also build consensus outside the review conference process: Bans on autonomous nuclear weapons could be discussed as part of broader multilateral discussions on a new autonomous weapons ban.
The United States should be a leader in this effort. The congressionally appointed National Security Commission on AI recommended humans maintain control over nuclear weapons. Page 12 notes, "The United States should (1) clearly and publicly affirm existing US policy that only human beings can authorize employment of nuclear weapons and seek similar commitments from Russia and China." Formalizing this requirement in international law would make it far more robust.
Unfortunately, requiring humans to make decisions on firing nuclear weapons is not the end of the story. An obvious challenge is how to ensure the commitments to human control are trustworthy. After all, it is quite tough to tell whether a weapon is truly autonomous. But there might be options to at least reassure: Countries could pass laws requiring humans to approve decisions on the use of nuclear weapons; provide minimum transparency into nuclear command and control processes to demonstrate meaningful human control; or issue blanket bans on any research and development aimed at making nuclear weapons autonomous.
Now, none of this is to suggest that every fusion of artificial intelligence and nuclear weapons is terrifying. Or, more precisely, any more terrifying than nuclear weapons on their own. Artificial intelligence also has applications in situational awareness, intelligence collection, information processing, and improving weapons accuracy. Artificial intelligence may aid decision support and communication reliability, which may help nuclear stability. In fact, artificial intelligence has already been incorporated into various aspects of nuclear command, control, and communication systems, such as early warning systems. But that should never extend to complete machine control over the decision to use nuclear weapons.
The challenge of autonomous nuclear weapons is a serious one that has gotten little attention. Making changes to the Nuclear Non-Proliferation Treaty to require nuclear weapons states to maintain human control over nuclear weapons is just the start. At the very least, if a nuclear war breaks out, we'll know who to blame.
The author would like to thank Philipp C. Bleek, James Johnson, and Josh Pollack for providing invaluable input on this article.