In January 2017, a group of artificial intelligence researchers gathered at the Asilomar Conference Grounds in California and developed 23 principles for artificial intelligence, later dubbed the Asilomar AI Principles. The sixth principle states that AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible. Thousands of people in both academia and the private sector have since signed on to these principles, but, more than three years after the Asilomar conference, many questions remain about what it means to make AI systems safe and secure. The field's rapid development and the complexity of deployments in health care, financial trading, transportation, and translation, among others, make verifying these properties all the more difficult.
Much of the discussion to date has centered on how beneficial machine learning algorithms may be for identifying and defending against computer-based vulnerabilities and threats by automating the detection of and response to attempted attacks.1 Conversely, concerns have been raised that using AI for offensive purposes may make cyberattacks increasingly difficult to block or defend against by enabling malware to adapt rapidly to restrictions imposed by countermeasures and security controls.2 These are also the contexts in which many policymakers most often think about the security impacts of AI. For instance, a 2020 report on Artificial Intelligence and UK National Security commissioned by the U.K.'s Government Communications Headquarters highlighted the need for the United Kingdom to incorporate AI into its cyber defenses to proactively detect and mitigate threats that require a speed of response far greater than human decision-making allows.3
A related but distinct set of issues concerns how AI systems can themselves be secured, not just how they can be used to augment the security of our data and computer networks. The push to implement AI security solutions in response to rapidly evolving threats makes the need to secure AI itself even more pressing; if we rely on machine learning algorithms to detect and respond to cyberattacks, it is all the more important that those algorithms be protected from interference, compromise, or misuse. Increasing dependence on AI for critical functions and services will not only create greater incentives for attackers to target those algorithms, but also the potential for each successful attack to have more severe consequences.
This policy brief explores the key issues in attempting to improve cybersecurity and safety for artificial intelligence as well as roles for policymakers in helping address these challenges. Congress has already indicated its interest in cybersecurity legislation targeting certain types of technology, including the Internet of Things and voting systems. As AI becomes a more important and widely used technology across many sectors, policymakers will find it increasingly necessary to consider the intersection of cybersecurity with AI. In this paper, I describe some of the issues that arise in this area, including the compromise of AI decision-making systems for malicious purposes, the potential for adversaries to access confidential AI training data or models, and policy proposals aimed at addressing these concerns.
One of the major security risks to AI systems is the potential for adversaries to compromise the integrity of their decision-making processes so that they do not make choices in the manner that their designers would expect or desire. One way to achieve this would be for adversaries to directly take control of an AI system so that they can decide what outputs the system generates and what decisions it makes. Alternatively, an attacker might try to influence those decisions more subtly and indirectly by delivering malicious inputs or training data to an AI model.4
For instance, an adversary who wants to compromise an autonomous vehicle so that it will be more likely to get into an accident might exploit vulnerabilities in the car's software in order to make driving decisions itself. However, remotely accessing and exploiting the software operating a vehicle could prove difficult, so an adversary might instead try to make the car ignore stop signs by defacing the signs in the area with graffiti, so that the computer vision algorithm can no longer recognize them as stop signs. This process by which adversaries cause AI systems to make mistakes by manipulating inputs is called adversarial machine learning. Researchers have found that small changes to digital images, undetectable to the human eye, can be sufficient to cause AI algorithms to completely misclassify those images.5
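To make the mechanism concrete, here is a deliberately tiny sketch of an adversarial perturbation against a toy linear classifier. The model, the input, and the perturbation budget are all synthetic assumptions; real attacks target deep networks, but the core idea (nudging each input feature slightly in the direction that most changes the model's score) is the same one behind gradient-based methods such as the fast gradient sign method:

```python
import numpy as np

# Toy "image classifier": a linear model whose score w . x is positive
# when the input is classified as a stop sign. Purely illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=100)                 # the model's weights (100 "pixels")

x = 0.2 * w / np.linalg.norm(w)          # an input classified as a stop sign
print("clean score:", w @ x)             # positive => "stop sign"

# Adversarial perturbation: for a linear model, the gradient of the score
# with respect to x is just w, so stepping each feature slightly against
# sign(w) lowers the score as fast as possible per unit of change.
epsilon = 0.05                           # small per-feature budget
x_adv = x - epsilon * np.sign(w)

print("adversarial score:", w @ x_adv)   # negative => no longer a stop sign
```

No single feature changes by more than the small budget, yet the accumulated effect across all features flips the classification. This is the same asymmetry that lets imperceptible pixel changes fool much larger vision models.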
An alternative to manipulating inputs is data poisoning, in which adversaries cause an AI model to be trained on inaccurate, mislabeled data. One example would be pictures of stop signs labeled as something else, so that the algorithm does not recognize stop signs when it encounters them on the road. This poisoning can then lead an AI algorithm to make mistakes and misclassifications later on, even if an adversary does not have access to directly manipulate the inputs it receives.6 Even selectively training an AI model on a subset of correctly labeled data may be sufficient to compromise it so that it makes inaccurate or unexpected decisions.
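A minimal sketch of the stop sign scenario, using a 1-nearest-neighbor stand-in for the model and entirely synthetic data (every name and number here is an illustrative assumption): the adversary injects mislabeled, stop-sign-like points into the training set, and the model trained on the poisoned data misclassifies a stop sign it previously handled correctly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: flattened image features for two classes.
X_stop  = rng.normal(loc=+1.0, size=(200, 20))   # label 1: stop signs
X_other = rng.normal(loc=-1.0, size=(200, 20))   # label 0: everything else
X = np.vstack([X_stop, X_other])
y = np.array([1] * 200 + [0] * 200)

def predict_1nn(X, y, x):
    # A minimal 1-nearest-neighbor "model": copy the label of the
    # closest training point.
    return y[np.argmin(np.linalg.norm(X - x, axis=1))]

x_test = rng.normal(loc=+1.0, size=20)           # a fresh stop sign
print(predict_1nn(X, y, x_test))                 # 1: recognized correctly

# Poisoning: the adversary slips mislabeled points into the training set,
# crafted to sit near the kinds of inputs it wants misclassified.
X_poison = x_test + rng.normal(scale=0.3, size=(50, 20))
X_bad = np.vstack([X, X_poison])
y_bad = np.concatenate([y, np.zeros(50, dtype=int)])  # labeled "not a stop sign"

print(predict_1nn(X_bad, y_bad, x_test))         # 0: stop sign now ignored
```

The attacker never touches the deployed model or its inputs; corrupting a slice of the training data is enough to change the model's behavior at the point that matters.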
These risks speak to the need for careful control over both the training datasets used to build AI models and the inputs those models are later fed, in order to secure machine-learning-enabled decision-making. However, neither goal is straightforward. Inputs to machine learning systems, in particular, are often beyond AI developers' control: whether there will be graffiti on the street signs that an autonomous vehicle's computer vision system encounters, for instance. Developers have typically had much greater control over training datasets. But in many cases, those datasets may contain very personal or sensitive information, raising yet another set of concerns about how that information can best be protected and anonymized. These concerns often create trade-offs for developers about how that training is done and how much direct access to the training data they themselves have.7
Research on adversarial machine learning has shown that making AI models more robust to data poisoning and adversarial inputs often involves building models that reveal more information about the individual data points used to train those models.8 When sensitive data are used to train these models, this creates a new set of security risks, namely that adversaries will be able to access the training data or infer training data points from the model itself. Trying to secure AI models from this type of inference attack can leave them more susceptible to the adversarial machine learning tactics described above and vice versa. This means that part of maintaining security for artificial intelligence is navigating the trade-offs between these two different, but related, sets of risks.
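The leakage risk can be sketched with a toy membership inference attack. The "model" below is a hypothetical stand-in that, like a badly overfit classifier, is most confident on records it memorized during training; an attacker who can query the model's confidence can then tell members of the sensitive training set apart from non-members:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sensitive training records, and fresh records from the same population.
train = rng.normal(size=(100, 10))
fresh = rng.normal(size=(100, 10))

def model_confidence(x, train):
    # Stand-in for an overfit model: confidence decays with the distance
    # from x to the nearest memorized training record.
    return np.exp(-np.linalg.norm(train - x, axis=1).min())

# Membership inference: guess "this record was in the training set"
# whenever the model is suspiciously confident about it.
threshold = 1.0   # a fully memorized record gives distance 0, confidence 1
in_train = sum(model_confidence(x, train) >= threshold for x in train)
in_fresh = sum(model_confidence(x, train) >= threshold for x in fresh)

print(in_train, in_fresh)   # the attacker separates members from non-members
```

Defenses that limit what the model reveals about individual records blunt this attack, but, as noted above, they can also leave the model more exposed to adversarial inputs and poisoning; that is the trade-off developers must navigate.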
In the past four years, there has been a rapid acceleration of government interest and policy proposals regarding artificial intelligence and security, with 27 governments publishing official AI plans or initiatives by 2019.9 However, many of these strategies focus more on countries' plans to fund more AI research activity, train more workers in this field, and encourage economic growth and innovation through development of AI technologies than they do on maintaining security for AI. Countries that have proposed or implemented security-focused policies for AI have emphasized the importance of transparency, testing, and accountability for algorithms and their developers, although few have gotten to the point of actually operationalizing these policies or figuring out how they would work in practice.
In the United States, the National Security Commission on Artificial Intelligence (NSCAI) has highlighted the importance of building trustworthy AI systems that can be audited through a rigorous, standardized system of documentation.10 To that end, the commission has recommended the development of an extensive design documentation process and standards for AI models, including what data is used by the model, what the model's parameters and weights are, how models are trained and tested, and what results they produce. These transparency recommendations speak to some of the security risks around AI technology, but the commission has not yet extended them to explain how this documentation would be used for accountability or auditing purposes. At the local government level, the New York City Council established an Automated Decision Systems Task Force in 2017 that stressed the importance of security for AI systems; however, the task force provided few concrete recommendations beyond noting that it grappled with "finding the right balance between emphasizing opportunities to share information publicly about City tools, systems, and processes, while ensuring that any relevant legal, security, and privacy risks were accounted for."11
A 2018 report by a French parliamentary mission, titled "For a Meaningful Artificial Intelligence: Towards a French and European Strategy," offered similarly vague suggestions. It highlighted several potential security threats raised by AI, including manipulation of input data or training data, but concluded only that there was a need for greater collective awareness and more consideration of safety and security risks starting in the design phase of AI systems. It further called on the government to seek the support of "specialist actors, who are able to propose solutions thanks to their experience and expertise" and advised that the French Agence nationale de la sécurité des systèmes d'information (ANSSI) should be responsible for monitoring and assessing the security and safety of AI systems. In a similar vein, China's 2017 New Generation AI Development Plan proposed developing security and safety certifications for AI technologies as well as accountability mechanisms and disciplinary measures for their creators, but the plan offered few details as to how these systems might work.
For many governments, the next stage of considering AI security will require figuring out how to implement ideas of transparency, auditing, and accountability to effectively address the risks of insecure AI decision processes and model data leakage.
Transparency will require the development of a more comprehensive documentation process for AI systems, along the lines of the proposals put forth by the NSCAI. Rigorous documentation of how models are developed and tested and what results they produce will enable experts to identify vulnerabilities in the technology, potential manipulations of input data or training data, and unexpected outputs.
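As a sketch of what such documentation might contain, the record below captures the kinds of items the NSCAI recommendations describe: the model's data, parameters, training and testing procedures, and results. The field names and values are hypothetical assumptions for illustration, not an official schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelRecord:
    # Hypothetical documentation fields; not an official NSCAI schema.
    name: str
    training_data: list      # datasets used, with versions or checksums
    parameters: dict         # hyperparameters and weight checksums
    training_procedure: str  # how the model was trained
    test_procedure: str      # how the model was evaluated
    results: dict            # metrics the testing produced

record = ModelRecord(
    name="stop-sign-classifier-v3",
    training_data=["traffic-signs-2020 (checksum of each file recorded)"],
    parameters={"learning_rate": 0.001, "epochs": 20},
    training_procedure="supervised training on labeled road imagery",
    test_procedure="held-out test split plus adversarial red-team suite",
    results={"test_accuracy": 0.97, "adversarial_accuracy": 0.81},
)

# A serialized record like this is what auditors could diff, sign, and archive.
print(json.dumps(asdict(record), indent=2))
```

The point of a standardized record is less the format than the discipline: if every deployed model ships with one, auditors can check claims against artifacts instead of reconstructing a model's history after an incident.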
Thorough documentation of AI systems will also enable governments to develop effective testing and auditing techniques as well as meaningful certification programs that provide clear guidance to AI developers and users. These audits would, ideally, leverage research on adversarial machine learning and model data leakage to test AI models for vulnerabilities and assess their overall robustness and resilience to different forms of attacks through an AI-focused form of red teaming. Given the dominance of the private sector in developing AI, it is likely that many of these auditing and certification activities will be left to private businesses to carry out. But policymakers could still play a central role in encouraging the development of this market by funding research and standards development in this area and by requiring certifications for their own procurement and use of AI systems.
Finally, policymakers will play a vital role in determining accountability mechanisms and liability regimes to govern AI when security incidents occur. This will involve establishing baseline requirements for what AI developers must do to show they have carried out their due diligence with regard to security and safety, such as obtaining recommended certifications or submitting to rigorous auditing and testing standards. Developers who do not meet these standards and build AI systems that are compromised through data poisoning or adversarial inputs, or that leak sensitive training data, would be liable for the damage caused by their technologies. This would serve both as an incentive for companies to comply with policies related to AI auditing and certification, and as a means of clarifying who is responsible when AI systems cause serious harm due to a lack of appropriate security measures, and what the appropriate penalties are in those circumstances.
The proliferation of AI systems in critical sectors, including transportation, health, law enforcement, and military technology, makes clear just how important it is for policymakers to take seriously the security of these systems. This will require governments to look beyond just the economic promise and national security potential of automated decision-making systems to understand how those systems themselves can best be secured through a combination of transparency guidelines, certification and auditing standards, and accountability measures.
The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.
Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.