This article was originally published for the German Federal Foreign Office's Artificial Intelligence and Weapons of Mass Destruction Conference 2024, held on the 28th of June. You can also read The implications of AI in nuclear decision-making by ELN Policy Fellow Alice Saltini, who will be speaking on a panel at the conference.
Artificial intelligence (AI) is a catalyst for many trends that increase the salience of nuclear, biological or chemical weapons of mass destruction (WMD). AI can facilitate and speed up the development or manufacturing of WMD or precursor technologies. With AI assistance, those who currently lack the necessary knowledge to produce fissile materials or toxic substances can acquire WMD capabilities. AI itself is of proliferation concern. As an intangible technology, it spreads easily, and its diffusion is difficult to control through supply-side mechanisms, such as export controls. At the intersection of nuclear weapons and AI, there are concerns about rising risks of inadvertent or intentional nuclear weapons use, reduced crisis stability and new arms races.
To be sure, AI also has beneficial applications and can reduce WMD-related risks. AI can make transparency and verification instruments more effective and efficient because of its ability to process immense amounts of data and detect unusual patterns, which may indicate noncompliant behaviour. AI can also improve situational awareness in crisis situations.
While efforts to explore and exploit the military dimension of AI are moving ahead rapidly, these beneficial dimensions of the AI-WMD intersection remain under-researched and under-used.
The immediate challenge is to build guardrails around the integration of AI into the WMD sphere and to slow down the incorporation of AI into research, development, production, and planning for nuclear, biological and chemical weapons. Meanwhile, governments should identify risk mitigation measures and, at the same time, intensify their search for the best approaches to capitalise on the beneficial applications of AI in controlling WMD. Efforts to ensure that the international community is able to govern this technology, rather than let it govern us, have to address challenges at three levels of the AI and WMD intersection.
First, AI can facilitate the development of biological, chemical or nuclear weapons by making research, development and production faster and more efficient. This is true even for old technologies like fissile material production, which remains expensive and requires large-scale industrial facilities. AI can help to optimise uranium enrichment or plutonium separation, two key processes in any nuclear weapons programme.
The connection between AI and chemistry and biochemistry is particularly worrying. The Director General of the Organisation for the Prohibition of Chemical Weapons (OPCW) has warned of the potential risks that artificial intelligence-assisted chemistry may pose to the Chemical Weapons Convention, and of the ease and speed with which novel routes to existing toxic compounds can be identified. This creates serious new challenges for the control of toxic substances and their precursors.
Similar concerns exist with regard to biological weapons. Synthetic biology is in itself a dynamic field. But AI puts the development of novel chemical or biological agents through such new technologies on steroids. Rather than going through lengthy and costly lab experiments, AI can predict the biological effects of known and even unknown agents. A much-cited paper by Filippa Lentzos and colleagues describes an experiment during which an AI, in less than six hours and running on a standard hardware configuration, generated forty thousand molecules that scored "within our desired threshold", meaning that these agents were likely more toxic than publicly known chemical warfare agents.
Second, AI could ease access to nuclear, biological and chemical weapons by illicit actors by giving advice on how to develop and produce WMD or relevant technologies from scratch.
To be sure, current commercial AI providers have instructed their AI models not to answer questions on how to build WMD or related technologies. But such limits will not remain impermeable. And in future, the problem may not be so much preventing the misuse of existing AI models but the proliferation of AI models or the technologies that can be used to build them. Only a fraction of all spending on AI is invested in the safety and security of such models.
Third, the integration of AI into the WMD sphere can also lower the threshold for the use of nuclear, biological or chemical weapons. All nuclear weapon states have begun to integrate AI into their nuclear command, control, communication and information (NC3I) infrastructure. The ability of AI models to analyse large amounts of data at unprecedented speed can improve situational awareness and help warn, for example, of incoming nuclear attacks. But at the same time, AI may also be used to optimise military strike options. Because of the lack of transparency around AI integration, fears that adversaries may be intent on conducting a disarming strike with AI assistance can increase, setting up a race to the bottom in nuclear decision-making.
In a crisis situation, overreliance on AI systems that are unreliable or working with faulty data may create additional problems. Data may be incomplete or may have been manipulated. AI models themselves are not objective. These problems are structural and thus not easily fixed. A UNIDIR study, for example, found that gender norms and bias can be introduced into machine learning throughout its life cycle. Another inherent risk is that AI systems designed and trained for military uses are biased towards war-fighting rather than war avoidance, which would make de-escalation in a nuclear crisis much more difficult.
The consensus among nuclear weapon states that a human must always stay in the loop before a nuclear weapon is launched is important, but it remains a problem that understandings of what human control means may differ significantly.
It would be a fool's errand to try to slow down AI's development. But we need to decelerate AI's convergence with the research, development, production, and military planning related to WMD. It must also be possible to prevent spillover from AI's integration into the conventional military sphere to applications leading to nuclear, biological, and chemical weapons use.
Such deceleration and channelling strategies can build on some universal norms and prohibitions. But they will also have to be tailored to the specific regulative frameworks, norms and patterns regulating nuclear, biological and chemical weapons. The zero draft of the Pact for the Future, to be adopted at the September 2024 Summit of the Future, points in the right direction by suggesting a commitment by the international community to developing norms, rules and principles on the design, development and use of military applications of artificial intelligence through a multilateral process, while also ensuring engagement with stakeholders from industry, academia, civil society and other sectors.
Fortunately, efforts to improve AI governance on WMD do not need to start from scratch. At the global level, the prohibitions of biological and chemical weapons enshrined in the Biological and Chemical Weapons Conventions are all-encompassing: the general purpose criterion prohibits all chemical and biological agents that are not used peacefully, whether AI comes into play or not. But AI may test these prohibitions in various ways, including by merging biotechnology and chemistry seamlessly with other novel technologies. It is, therefore, essential that the OPCW monitors these developments closely.
International Humanitarian Law (IHL) implicitly establishes limits on the military application of AI by prohibiting the indiscriminate and disproportionate use of force in war. The Group of Governmental Experts (GGE) on Lethal Autonomous Weapons under the Convention on Certain Conventional Weapons (CCW) is doing important work by attempting to spell out what the IHL requirements mean for weapons that act without human control. These discussions will, mutatis mutandis, also be relevant for any nuclear, biological or chemical weapons that would rely on AI functionalities that reduce human control.
Shared concerns around the risks of AI and WMD have triggered a range of UN-based initiatives to promote norms around responsible use. The legal, ethical and humanitarian questions raised at the April 2024 Vienna Conference on Autonomous Weapons Systems are likely to inform debates and decisions around limits on AI integration into WMD development and employment, particularly nuclear weapons use. After all, similar pressures to shorten decision times and improve the autonomy of weapons systems apply to nuclear as well as conventional weapons.
From a regulatory point of view, it is advantageous that the market for AI-related products is still highly concentrated around a few big players. It is positive that some of the countries with the largest AI companies are also investing in the development of norms around responsible use of AI. It is obvious that these companies have agency and, in some cases, probably more influence on politics than small states.
The Bletchley Declaration, adopted at the November 2023 AI Safety Summit in the UK, for example, highlighted the particular safety risks that arise at the frontier of AI. These could include risks arising from potential intentional misuse or unintended issues of control relating to alignment with human intent. The summits on Responsible Artificial Intelligence in the Military Domain (REAIM) are another effort at coalition building around military AI that could help to establish the rules of the game.
The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, agreed in Washington in September 2023, confirmed important principles that also apply to the WMD sphere, including the applicability of international law and the need to implement appropriate safeguards to mitigate risks of failures in military AI capabilities. One step in this direction would be for the nuclear weapon states to conduct so-called failsafe reviews that would aim to comprehensively evaluate how control of nuclear weapons can be ensured at all times, even when AI-based systems are incorporated.
All such efforts could and should be building blocks that can be incorporated into a comprehensive governance approach. Yet, the risks of AI leading to increased likelihood of nuclear weapons use are the most pressing. Artificial intelligence is not the only emerging and disruptive technology affecting international security: space warfare, cyber, hypersonic weapons, and quantum technologies are all affecting nuclear stability. It is, therefore, particularly important that nuclear weapon states, amongst themselves, build a better understanding and confidence about the limits of AI integration into NC3I.
An understanding between China and the United States on guardrails around military misuse of AI would be the single most important measure to slow down the AI race. The fact that Presidents Xi Jinping and Joe Biden agreed in November 2023 that China and the United States have broad common interests, including on artificial intelligence, and committed to intensify consultations on that and other issues, was a much-needed sign of hope, although China has since hesitated to actually engage in such talks.
Meanwhile, relevant nations can lead by example when considering the integration of AI into the WMD realm. This concerns, first of all, the nuclear weapon states, which can demonstrate responsible behaviour by pledging, for example, that they would not use AI to interfere with the nuclear command, control and communication systems of their adversaries. All states should also practice maximum transparency when conducting experiments around the use of AI for biodefence activities, because such activities can easily be mistaken for offensive work. Finally, the German government's pioneering role in looking at the impact of new and emerging technologies on arms control has to be recognised. Its Rethinking Arms Control conferences, including the most recent conference on AI and WMD on June 28 in Berlin with key contributors such as the Director General of the OPCW, are particularly important. Such meetings can systematically and consistently investigate the AI-WMD interplay in a dialogue between experts and practitioners. If they can agree on what guardrails and speed bumps are needed, an important step toward effective governance of AI in the WMD sphere will have been taken.
The opinions articulated above represent the views of the author(s) and do not necessarily reflect the position of the European Leadership Network or any of its members. The ELN's aim is to encourage debates that will help develop Europe's capacity to address the pressing foreign, defence, and security policy challenges of our time.
Image credit: Free AI-generated art image, public domain (CC0), mixed with Wikimedia Commons / Fastfission~commonswiki