This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the fourth question (part a.), which asks what international norms for artificial intelligence the United States should lead in developing, and whether it is possible to create mechanisms for the development and enforcement of AI norms.
In 1953, President Dwight Eisenhower asked the world to join him in building a framework for Atoms for Peace. He made the case for a global agreement to prevent the spread of nuclear weapons while also sharing the peaceful uses of nuclear technology for power, agriculture, and medicine. No one would argue the program completely prevented the spread of weapons technology: India and Pakistan used technology gained through Atoms for Peace in their nascent nuclear weapons programs. But it made for a safer world by paving the way for a system of inspections and controls on nuclear facilities, including the establishment of the International Atomic Energy Agency and, later, the widespread ratification of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). These steps were crucial for building what became known as the nuclear nonproliferation regime.
The world stands at a similar juncture today, at the dawn of the age of artificial intelligence (AI). The United States should apply lessons from the 70-year history of governing nuclear technology by building a framework for governing military applications of AI.
What would AI for Peace look like? The nature of AI is different from that of nuclear technology, but some of the principles that underpinned the nonproliferation regime can be applied to combat the dangers of AI. Government, the private sector, and academia can work together to bridge national divides. Scientists and technologists, not just traditional policymakers, will be instrumental in providing guidance about how to govern new technology. At a diplomatic level, sharing the peaceful benefits of technology can encourage countries to open themselves up to inspection and controls. And even countries that are competitors can cooperate to establish norms to prevent the spread of technology that would be destabilizing.
AI for Peace could go beyond current efforts by involving the private sector from the get-go and identifying the specific dangers AI presents and the global norms that could prevent those dangers (e.g., what does "meaningful human control" over smart machines mean in specific contexts?). It would also go beyond Department of Defense initiatives to build norms by encompassing peaceful applications. Finally, it would advance the United States' historic role as a leader in forging global consensus.
The Dangers of Artificial Intelligence
The uncertainty surrounding AI's long-term possibilities makes it difficult to regulate, but the potential for chaos is more tangible. AI could be used to inflict catastrophic kinetic, military, and political damage. AI-assisted weapons are essentially very smart machines that can find hidden targets more quickly and attack them with greater precision than conventional computer-guided weapons.
As AI becomes incorporated into society's increasingly autonomous information backbone, it could also pose a risk of catastrophic accidents. If AI becomes pervasive, banking, power generation, and hospitals will be even more vulnerable to cyberattack. Some speculate that an AI superintelligence could develop a strategic calculating ability so superior that it destabilizes arms control efforts.
There are limits to the nuclear governance analogy. Whereas nuclear technology was once the purview only of the most powerful states, the private sector leads AI innovation. States could once agree to safeguard nuclear secrets, but AI is already everywhere, including in every smartphone on the planet.
Its ubiquity shows its appeal, but the same ubiquity lowers the cost of sowing disorder. A recent study found that for less than $10, anyone could create a fake United Nations speech credible enough to be shared on the internet as real. Controlling the most dangerous uses of technology will require private sector initiatives to build safety into AI systems.
Scientists Speak Out
In 2015, Stephen Hawking, Peter Norvig, and others signed an open letter calling for more research on AI's impacts on society. The letter recognized the tremendous benefits AI could bring for human health and happiness, but also warned of unpredictable dangers. The key issue is that humans should remain in control. More than 700 AI and robotics researchers signed the 2017 Asilomar AI Principles, calling for shared responsibility and warning against an AI arms race.
The path to governing nuclear technology followed a similar pattern of exchange between scientists and policymakers. Around 1943, Niels Bohr, the famous Danish physicist, made the case that since scientists created nuclear weapons, they should take responsibility for efforts to control the technology. Two years later, after the first use of nuclear weapons, the United States created a committee to deliberate about whether the weapons should become central to U.S. military strategy, or whether the country should forgo them and avoid a costly arms race. The Acheson-Lilienthal committee's proposal to put nuclear weapons under shared international control failed to gain support, but it was one step in a consensus-building process. The U.S. Department of Defense, Department of State, and other agencies developed their own perspectives, and U.N. negotiations eventually produced the NPT. Since entering into force in 1970, it has become the most widely subscribed arms control treaty in history, with 191 states parties.
We are in the Acheson-Lilienthal age of governing AI. Neither disarmament nor shared control is feasible in the short term, and the best hope is to limit risk. The NPT was created with the principles of non-possession and non-transfer of nuclear weapons material and technology in mind, but AI code is too diffuse and too widely available for those principles to be the lodestar of AI governance.
What Norms Do We Want?
What then does nonproliferation look like in AI? What could or should be prohibited? One popular proposal is a "no kill" rule for unassisted AI: humans should bear responsibility for any military attack.
A current Defense Department directive requires "appropriate levels of human judgment" in autonomous system attacks aimed at humans. This allows the United States to claim the moral high ground. The next step is to add specificity to what "appropriate levels of judgment" means in particular classes of technology. For example, greater human control might be required in proportion to greater potential for lethality. Many of AI's dangers stem from the possibility that it might act through code too complex for humans to understand, or that it might learn so rapidly as to be outside human direction and therefore threaten humanity. We must consider how these situations might arise and what could be done to preserve human control. Roboticists say that such existing tools as reinforcement learning and utility functions will not solve the control problem.
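To make the proportionality idea concrete, here is a minimal "policy as code" sketch of how human control might scale with lethality. Everything in it, the control levels, the lethality score, and the thresholds, is a hypothetical assumption for illustration, not language from the directive:

```python
from enum import Enum

class HumanControl(Enum):
    HUMAN_ON_THE_LOOP = 1   # a human monitors and can veto
    HUMAN_IN_THE_LOOP = 2   # a human must approve each engagement
    HUMAN_ONLY = 3          # the machine may recommend, never act

def required_control(lethality_score: float) -> HumanControl:
    """Map an assessed lethality potential (0.0 to 1.0) to a minimum
    level of human control. The thresholds are illustrative only."""
    if lethality_score < 0.2:        # e.g., logistics, surveillance
        return HumanControl.HUMAN_ON_THE_LOOP
    if lethality_score < 0.7:        # e.g., defensive intercept systems
        return HumanControl.HUMAN_IN_THE_LOOP
    return HumanControl.HUMAN_ONLY   # e.g., attacks aimed at humans

assert required_control(0.9) is HumanControl.HUMAN_ONLY
```

The point of the sketch is not the numbers but the shape of the rule: a negotiable, auditable mapping from a system's destructive potential to a required degree of human judgment.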
An AI system might need to be turned off for maintenance or, crucially, in cases where it poses a threat. Robots often have a red shutdown button in case of emergency, but an AI system might be able to learn to disable its own off switch, which would likely be software rather than a big red button. Google is developing an off switch it terms a "kill switch" for its applications, and European lawmakers are debating whether and how to make a kill switch mandatory. This may require a different kind of algorithm than currently exists, one with safety and interpretability at the core. It is not clear what an off switch means in military terms, but American-Soviet arms control faced a similar problem. Yet arms control proceeded through technical negotiations that established complex yet robust command and control systems.
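One published line of research, DeepMind's work on "safely interruptible agents," suggests what such an algorithm might involve. The toy sketch below, with hypothetical names and values throughout, illustrates the core idea under that assumption: an operator override always wins, and overridden steps are excluded from learning, so the agent never acquires an incentive to resist its own off switch.

```python
import random

class InterruptibleAgent:
    """Toy epsilon-greedy agent whose off switch cannot be learned away."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.q = {a: 0.0 for a in self.actions}  # toy value estimates

    def act(self, interrupted: bool):
        if interrupted:
            return "SAFE_SHUTDOWN"  # the operator override always wins
        if random.random() < 0.1:   # occasionally explore
            return random.choice(self.actions)
        return max(self.q, key=self.q.get)

    def learn(self, action, reward, interrupted: bool):
        if interrupted:
            # Overridden steps are excluded from learning, so the agent
            # gains nothing by predicting or resisting interruption.
            return
        self.q[action] += 0.1 * (reward - self.q[action])

agent = InterruptibleAgent(["patrol", "hold"])
choice = agent.act(interrupted=False)
agent.learn(choice, reward=1.0, interrupted=False)
print(agent.act(interrupted=True))  # -> SAFE_SHUTDOWN
```

This is a sketch of a design principle, not a solution: making interruption invisible to the learner is easy in a toy, and an open research problem in systems that model their operators.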
Building International Consensus
The NPT was preceded by a quarter century of deliberation and consensus building. We are at the beginning of that timeline for AI. The purpose of treaties and consensus building is to limit the risks of dangerous technology by convincing countries that restraint is in the interests of mankind and their own security.
Nuclear nonproliferation agreements succeeded because the United States and the Soviet Union convinced non-nuclear nations that limiting the spread of nuclear weapons was in their interest even if it meant renouncing weapons while other countries still had them. In 1963, John F. Kennedy asked what it would mean to have nuclear weapons "in so many hands, in the hands of countries large and small, stable and unstable, responsible and irresponsible, scattered throughout the world." The answer was that more weapons in the hands of more countries would increase the chance of accidents, proxy wars, weak command and control systems, and first strikes. The threat of nuclear weapons in the hands of regional rivals could be more destabilizing than in the hands of the superpowers. We do not yet know if the same is true for AI, but we should investigate the possibility.
Access to Peaceful Technology
It is a tall order to ask countries to buy into a regime that limits their development of a powerful new technology. Nuclear negotiations offered the carrot of eventual disarmament, but what disarmament means in the AI context is not clear. However, one principle could carry over: linking restrictions on AI weapons to access to the benefits of AI for peaceful uses and security cooperation. Arms control negotiator William Foster wrote in 1967 that the NPT would stimulate widespread, peaceful development of nuclear energy. Why not promise to share peaceful and humanitarian applications of AI, for agriculture and medicine, for example, with countries that agree to participate in global controls?
The foundation of providing access to peaceful nuclear technology in exchange for monitoring materials and technology led to the development of a system of inspections known as safeguards. These were controversial and initially not strong enough to prevent the spread of nuclear weapons, but they took hold over time. A regime for AI inspection and verification will take time to emerge.
As in the nuclear sphere, the first step is to build consensus and identify what other nations want and where common interest lies. AI exists in lines of code, not molecules of uranium. For publicly available AI code, principles of transparency may help mutual inspection. For code that is protected, more indirect measures of monitoring and verification may be devised.
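To make "transparency" slightly more concrete, here is a minimal sketch of one indirect verification measure: parties publish cryptographic fingerprints of declared code or model artifacts, and inspectors confirm that what is deployed matches what was declared. The hashing uses only Python's standard hashlib; the verification workflow around it is an assumption for illustration, not an existing regime.

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Compute the SHA-256 fingerprint of a published code or model artifact."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            h.update(chunk)
    return h.hexdigest()

def verify(artifact: Path, declared_digest: str) -> bool:
    # A match shows the deployed artifact is byte-for-byte what was
    # declared; a mismatch flags an undisclosed modification.
    return digest(artifact) == declared_digest
```

Fingerprints verify only what a state chooses to declare, which is why the harder problem, monitoring protected code, would require the more indirect measures noted above.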
Finally, nuclear arms control and nonproliferation succeeded as part of a larger strategy (including extended deterrence) that provided strategic stability and reassurance to U.S. allies. America and the Soviet Union, despite their Cold War competition, found common interests in preventing the spread of nuclear weapons. AI strategy goes hand-in-hand with a larger defense strategy.
A New AI for Defense Framework
Once again, the world needs U.S. global leadership, this time to prevent an AI arms race, accident, or catastrophic attack. U.N.-led discussions are valuable but overly broad, and the technology has too many military applications for industry alone to lead regulation. Current U.N. talks are preoccupied with discussion of a ban on lethal autonomous weapons. These are sometimes termed "killer robots" because they are smart machines that can move in the world and make decisions without human control. They cause concern if human beings are not involved in the decision to kill. The speed and scale of AI deployment call for more nuance than the current U.N. talks can provide, and more involvement by more stakeholders, including national-level governments and industry.
As at the dawn of the nuclear age, the United States can build global consensus in the age of AI to reduce risks and make the world safe for one of its leading technologies, one that's valuable to U.S. industry and to humanity.
Washington should build a framework for a global consensus on how to govern AI technology that could be weaponized. Private sector participation would be crucial, both for addressing governance and for deciding how to share peaceful benefits to incentivize participation. The Pentagon, in partnership with private sector technology firms, is a natural leader because of its budget and role in the industrial base.
An AI for Peace program should articulate the dangers of this new technology, principles to manage those dangers (e.g., a "no kill" rule, human control, an off switch), and a structure to shape the incentives of other states (perhaps a system of monitoring and inspection). Our age is not friendly to new treaties, but we can foster new norms. We can learn from the nuclear age that countries will agree to limit dangerous technology with the promise of peaceful benefits for all.
Patrick S. Roberts is a political scientist at the nonprofit, nonpartisan RAND Corporation. Roberts served as an advisor in the State Department's Bureau of International Security and Nonproliferation, where he worked on the NPT and other nuclear issues.
Image: Nuclear Regulatory Commission