Four Things to Consider on the Future of AI-enabled Deterrence

Editor's Note: The impact of artificial intelligence on security policy has many dimensions, and one of the most important will be how it shapes deterrence. Artificial intelligence complicates many of the components of successful deterrence, such as communicating a threat clearly and being prepared for adversary adaptation. Alex Wilner, Casey Babb and Jessica Davis of Carleton University unpack the relationship between artificial intelligence and deterrence, explaining some of the likely challenges and offering suggestions for how to improve deterrence.

Daniel Byman

***

Analysts and policymakers alike believe artificial intelligence (AI) may fundamentally reshape security. It is now vital to understand its implications for deterrence and coercion. Over the past three years, with funding from Canada's Department of National Defence through its Innovation for Defence Excellence and Security (IDEaS) program, we undertook extensive research to better understand how AI intersects with traditional deterrence theory and practice in both the physical and digital domains. After dozens of interviews and consultations with subject matter experts in the United States, Canada, the United Kingdom, Europe, Israel and elsewhere, we came away with four key insights about AI's potential effect on deterrence. These findings pose challenges that states will have to reckon with as they integrate AI into their efforts to deter threats ranging from organized crime, to terrorist attacks and cyberattacks, to nuclear conflict and beyond.

AI Poses a Communications Dilemma

First, deterrence does not usually happen on its own. It is the result of countries actively (and occasionally passively) signaling or communicating their intentions, capabilities and expectations to would-be adversaries. Several experts we spoke with stressed that the prerequisites of communication and signaling pose a particular limitation on applying AI to deterrence. Darek Saunders, with the European Border and Coast Guard Agency, noted that no one (no government department or agency) is making public how they are detecting certain things, so threat actors will not know whether it is AI, good intelligence or just bad luck that has put them in jeopardy or forced them to divert their efforts elsewhere. Unless governments are willing to publicly clarify how they use AI to monitor certain forms of behavior, it will be nearly impossible to attribute what utility, if any, AI has had in deterring adversaries. Joe Burton, with the New Zealand Institute for Security and Crime Science, drew parallels with the Cold War to illustrate the limitations of communication in AI-enabled coercion: "Deterrence was effective because we knew what a nuclear explosion looked like. If you can't demonstrate what an AI capability looks like, it's not going to have a significant deterrence capability." Furthermore, many (if not all) capabilities related to AI require sensitive data, an asset most governments rarely advertise to friends or foes.

But here's the rub: By better communicating an AI capability to strengthen deterrence, countries risk inadvertently enabling an adversary to use that awareness to circumvent the capability and so avoid detection, capture or defeat. With AI, too much information may diminish deterrence. As Richard Pinch, former strategic adviser for mathematics research at the Government Communications Headquarters (GCHQ), explained to us: "If we let bad actors know about our capabilities, we would then be educating the adversary on what to watch for."

AI-enabled deterrence is ultimately about finding the right balance between communicating capabilities and safeguarding them. As Anthony Finkelstein, then-chief scientific adviser for national security in the United Kingdom, concluded: "You want to ensure your actual systems and technology are not known, at least not technically, while ensuring that the presence of these assets is known." More practical research is needed on developing the AI equivalent of openly testing a nuclear or anti-satellite weapon: a demonstration of capability and a signal of force and intent that does not undermine the technology itself.

AI Is Part of a Process

Second, AI is just one piece of a much larger process. The purpose of AI-driven security analysis is not chiefly immediate deterrence but rather identifying valuable irregularities in adversarial behavior and using that information to inform a broader posture of general deterrence. Using European border security as an example, Dave Palmer, formerly with GCHQ and MI5, explained that it's unlikely that AI will deter criminals at the border on the spot, or that you can have a single point in time where the technology will work and you will be able to capture someone doing something they shouldn't be. Instead, it is more likely that AI will allow border security agencies to better identify unlawful behaviour, providing that information downstream to other organizations, agencies or governments that can use it to inform a more comprehensive effort to stop malicious activity. In a nutshell, AI alone might not deter, but AI-enabled information captured within a larger process might make some behavior more difficult and less likely to occur.
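To make Palmer's point concrete, here is a minimal sketch of the kind of pipeline he describes, assuming Python with NumPy and scikit-learn: an unsupervised anomaly detector flags irregular (synthetic) crossing records and queues them for downstream review rather than triggering action on the spot. The features, parameters and flag_for_review() handoff are illustrative assumptions, not any agency's actual system.

```python
# A minimal sketch: unsupervised anomaly detection feeding a downstream
# review queue. Features, parameters and the handoff are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic crossing records: [hour of day, cargo weight (tonnes), route deviation (km)].
typical = rng.normal(loc=[13.0, 8.0, 1.0], scale=[3.0, 2.0, 0.5], size=(500, 3))
irregular = rng.normal(loc=[3.0, 2.0, 9.0], scale=[1.0, 0.5, 2.0], size=(5, 3))
records = np.vstack([typical, irregular])

# The model flags statistical irregularities; it does not establish wrongdoing.
detector = IsolationForest(contamination=0.01, random_state=0).fit(records)
labels = detector.predict(records)  # -1 = irregular, 1 = typical

def flag_for_review(record: np.ndarray) -> None:
    """Hypothetical downstream handoff: pass the record to analysts or
    partner agencies instead of acting on it directly."""
    print(f"queued for review: {np.round(record, 1)}")

for record, label in zip(records, labels):
    if label == -1:
        flag_for_review(record)
```

The design point matches what the interviews suggest: the model's output is an input to a larger process, not a deterrent act in itself.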

AI May Lead to Displacement and Adaptation

Third, successfully deterring activities within one domain may invite other unwanted activities within another. For example, if AI enables greater deterrence in the maritime domain, it may lead an adversary to pivot elsewhere and prioritize new cyber operations to achieve its objectives. In deterrence theory, especially as conceived of in criminology and terrorism studies, this phenomenon is usually understood as displacement (i.e., displacing criminal activities from one domain, or of a particular nature, to another).

Putting this into a larger context, the nature of AI's technological development suggests that its application to coercion will invite adversarial adaptation, innovation and mimicry. AI begets AI: new advancements and applications of AI will prompt adversaries to develop technological countermeasures. The more sophisticated the adversary, the more sophisticated its AI countermeasures, and by extension its countercoercion efforts, will be. State-based adversaries and sophisticated non-state actors, for instance, might manipulate or mislead the technology and data on which these AI-based capabilities rely. As an illustration, when the European Union sought to deter human smuggling across the Mediterranean into Italy and Spain by using AI to increase interdictions, authorities soon realized that smugglers were purposefully sending out misleading cellular data to misinform and manipulate the technology used to track and intercept them.
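The smuggling example can be rendered as a toy model. In the sketch below (again assuming Python with NumPy; all coordinates and volumes are invented), a naive tracker averages incoming cellular pings to estimate a vessel's position, and a flood of spoofed pings drags the estimate toward a decoy location, which is essentially the adaptation the interdiction effort ran into.

```python
# A toy illustration of adversarial data manipulation: spoofed pings
# skew a naive position estimate. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

true_position = np.array([35.0, 14.0])   # actual vessel (lat, lon)
decoy_position = np.array([36.5, 12.0])  # where the adversary wants trackers to look

genuine = true_position + rng.normal(scale=0.05, size=(20, 2))   # real pings
spoofed = decoy_position + rng.normal(scale=0.05, size=(40, 2))  # injected pings

# A tracker that trusts every ping equally is pulled toward the decoy.
naive_estimate = np.vstack([genuine, spoofed]).mean(axis=0)
print("true position:  ", true_position)
print("naive estimate: ", np.round(naive_estimate, 2))
```

Hardening the estimator (robust statistics, source authentication, cross-sensor fusion) raises the adversary's costs, which is deterrence by denial in miniature, but it also invites the next round of adaptation.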

From the perspective of deterrence theory, adversarial adaptation can be interpreted in different ways. It is a form of success, in that adaptation imposes a cost on an adversary and diminishes the feasibility of some activities, thereby augmenting deterrence by denial. But it can also be seen as a failure, because adaptation invites greater adversarial sophistication and new types of malicious activity.

How AI Is Used Will Depend on Ethics

Ethics and deterrence do not regularly meet, though a lively debate did emerge during the Cold War over the morality of nuclear deterrence. AI-enabled deterrence, by contrast, might hinge on ethics altogether. Several interviewees discussed concerns about how AI could be used within society. For example, Thorsten Wetzling, with the Berlin-based think tank Stiftung Neue Verantwortung, suggested that in some instances European countries are approaching AI from entirely different perspectives as a result of their diverging histories and strategic cultures. Germans appear especially conscious of the potential societal implications of AI, emerging technology and government reach because of the country's history of authoritarian rule; as a result, Germany is particularly drawn to regulation and oversight. On the opposite end of the spectrum, Israel tends to be more comfortable using emerging technology and certain forms of AI for national security purposes. Indeed, Israel's history of conflict informs its continual push to enhance and improve security and defense. "In national security uses," one Israeli interviewee noted, "there is little resistance to integrating new technologies like AI."

Other interviewees couched this larger argument as one centered on democracy, rather than strategic culture. Simona Soare, with the International Institute for Strategic Studies, argued that AI's utility for deterrence differs between democracies and non-democracies. One European interviewee noted, for illustration, that any information derived from AI is not simply applied; it is screened through multiple individuals, who determine what to do with the data and whether or not to act on it. As AI is further integrated into European security and defense, security and intelligence agencies, law enforcement, and militaries will likely be pressed to justify their use of AI. Ethics may drive that process of justification and, as a result, may figure prominently in AI's use in deterrence and coercion. In China, by contrast, AI has enabled the government to create what a U.S. official has called an "open-air prison" in Xinjiang province, where the regime has developed, tested and applied a range of innovative technologies to support the country's discriminatory national surveillance system. The government has leveraged everything from predictive analytics to advanced facial recognition designed to identify people's ethnicities in order to maintain its authoritarian grip. As Ross Andersen argued in the Atlantic in 2020, "[President] Xi [Jinping] wants to use artificial intelligence to build a digital system of social control, patrolled by precog algorithms that identify dissenters in real time." Of particular concern to the United States, Canada and other states is the way these technologies have been used to target and oppress China's Uighur and other minority populations. Across these examples, the use and utility of AI in deterrence and coercion will be partially informed by the degree to which ethics and norms play a role.
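The screening the European interviewee describes is, in software terms, a human-in-the-loop gate: model output is routed to reviewers rather than triggering action. A minimal sketch follows, again in Python; the Finding fields, threshold and routing rules are hypothetical illustrations, not any agency's actual workflow.

```python
# A minimal human-in-the-loop gate: model output is never acted on
# directly. Fields, threshold and routing are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    case_id: str
    model_score: float  # risk/anomaly score from an upstream model
    rationale: str      # context a reviewer needs to judge the score

REVIEW_THRESHOLD = 0.8  # hypothetical triage cutoff

def route(finding: Finding) -> str:
    """Decide what happens to a model output; no branch acts autonomously."""
    if finding.model_score < REVIEW_THRESHOLD:
        return "log only"
    # Even high-scoring findings require independent human sign-off.
    return "queue for multi-reviewer sign-off"

print(route(Finding("case-041", 0.91, "route deviation + transponder gap")))
```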

The Future of Deterrence

The concept of deterrence is flexible and has responded to shifting geopolitical realities and emerging technologies. This evolution has taken place within distinct waves of scholarship; a fifth wave is now emerging, with AI (and other technologies) as a prevailing feature. In practice, new and concrete applications of deterrence usually follow advancements in scholarship. Lessons drawn from the emerging empirical study of AI-enabled deterrence need to be appropriately applied and integrated into strategy, doctrine and policy. Much still needs to be done. For the United States, AI capabilities need to be translated into a larger process of coercion by signaling both technological capacity and political intent in a way that avoids adversarial adaptation while meeting the diverging ethical requirements of U.S. allies. No small feat. However, a failure to think creatively about how best to leverage AI for deterrence across the domains of warfare, cybersecurity and national security more broadly leaves the United States vulnerable to novel and surprising innovations in coercion introduced by challengers and adversaries. The geopolitics of AI includes a deterrence dimension.
