Microsoft/MITRE group declares war on machine learning vulnerabilities with Adversarial ML Threat Matrix – Diginomica

The extraordinary advances in machine learning that drive the increasing accuracy and reliability of artificial intelligence systems have been matched by a corresponding growth in malicious attacks, as bad actors exploit a new breed of vulnerabilities to distort the results those systems produce.

Microsoft reports it has seen a notable increase in attacks on commercial ML systems over the past four years. Other reports have also drawn attention to this problem. Gartner's Top 10 Strategic Technology Trends for 2020, published in October 2019, predicts that:

Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.

Training data poisoning happens when an adversary is able to introduce bad data into a model's training pool and thereby get it to learn things that are wrong. One approach targets the model's availability; the other targets its integrity (commonly known as "backdoor" attacks). Availability attacks aim to inject so much bad data into the system that whatever decision boundaries the model learns are essentially worthless. Integrity attacks are more insidious because the developer is unaware of them, so attackers can slip in and get the system to do what they want.
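
As a rough illustration of the availability variant, the sketch below flips a fraction of training labels for a toy scikit-learn classifier and compares its test accuracy against a cleanly trained model. The dataset, flip rate, and model choice are illustrative assumptions, not anything specified by the Threat Matrix.

```python
# Illustrative sketch only: a toy label-flipping ("availability") poisoning attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, flip_fraction, rng):
    """Flip the labels of a random fraction of training points."""
    y_poisoned = y.copy()
    n_flip = int(flip_fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # binary labels: 0 <-> 1
    return y_poisoned

rng = np.random.default_rng(0)
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, flip_fraction=0.4, rng=rng))

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

In practice the attacker rarely controls 40% of the training pool; the point of the sketch is simply that enough corrupted labels make the learned boundary unreliable.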

Model theft techniques are used to recover models, or information about the data used during training, which is a major concern because AI models represent valuable intellectual property trained on potentially sensitive data such as financial trades, medical records, or user transactions. Adversaries aim to recreate an AI model by querying its public API and using the responses as a guide for refining their own copy.
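
A minimal sketch of that extraction pattern, under the assumption that the attacker can only call a prediction interface: a local model stands in for the victim's public API, the attacker sends synthetic queries, records the answers, and trains a surrogate on them.

```python
# Illustrative sketch only: model extraction by querying a victim model's
# prediction interface and training a local surrogate on the returned labels.
# The victim model, query budget, and surrogate choice are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)   # stands in for a public API

# The attacker has no training data: synthesize queries, collect the API's answers.
rng = np.random.default_rng(1)
queries = rng.normal(size=(2000, 10))
stolen_labels = victim.predict(queries)      # each call is one "API request"

surrogate = DecisionTreeClassifier(random_state=1).fit(queries, stolen_labels)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of points")
```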

Adversarial examples are inputs to machine learning models that attackers have intentionally designed to cause the model to make a mistake. Basically, they are like optical illusions for machines.
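
For intuition, here is a minimal fast-gradient-sign style sketch against a linear (logistic regression) model, where the input gradient is simply the coefficient vector; the perturbation budget epsilon is an assumed value, and real attacks target far more complex models.

```python
# Illustrative sketch only: a fast-gradient-sign style perturbation of one input.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a point the model currently classifies correctly.
correct = np.where(model.predict(X) == y)[0]
x, true_label = X[correct[0]], y[correct[0]]

# For a linear model the input gradient of the loss is proportional to the
# coefficient vector; step in the direction that increases the loss.
grad_sign = np.sign(model.coef_[0]) * (1 if true_label == 0 else -1)
epsilon = 1.0   # assumed perturbation budget
x_adv = x + epsilon * grad_sign

print("original score:   ", model.decision_function(x.reshape(1, -1))[0])
print("perturbed score:  ", model.decision_function(x_adv.reshape(1, -1))[0])
print("original label:   ", model.predict(x.reshape(1, -1))[0])
print("perturbed label:  ", model.predict(x_adv.reshape(1, -1))[0])
```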

All of these methods are dangerous and growing in both volume and sophistication. As Ann Johnson, Corporate Vice President of SCI Business Development at Microsoft, wrote in a blog post:

Despite the compelling reasons to secure ML systems, Microsoft's survey spanning 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five out of the 28 businesses indicated that they don't have the right tools in place to secure their ML systems. What's more, they are explicitly looking for guidance. We found that this lack of preparation is not limited to smaller organizations. We spoke to Fortune 500 companies, governments, non-profits, and small and mid-sized organizations.

Responding to the growing threat, last week Microsoft, the nonprofit MITRE Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with MITRE to build a schema that organizes the approaches employed by malicious actors in subverting machine learning models, bolstering monitoring strategies around organizations' mission-critical systems. Said Johnson:

Microsoft worked with MITRE to create the Adversarial ML Threat Matrix, because we believe the first step in empowering security teams to defend against attacks on ML systems, is to have a framework that systematically organizes the techniques employed by malicious adversaries in subverting ML systems. We hope that the security community can use the tabulated tactics and techniques to bolster their monitoring strategies around their organization's mission critical ML systems.

The Adversarial ML Threat Matrix, modeled after the MITRE ATT&CK Framework, aims to address the problem with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE vetted as effective against production systems. With input from researchers at the University of Toronto, Cardiff University, and the Software Engineering Institute at Carnegie Mellon University, Microsoft and MITRE created a list of tactics that correspond to broad categories of adversary action.

Each technique in the schema falls within a tactic and is illustrated by a series of case studies showing how well-known attacks, such as the Microsoft Tay poisoning and the Proofpoint evasion attack, can be analyzed using the Threat Matrix. Noted Charles Clancy, MITRE's chief futurist, senior vice president, and general manager of MITRE Labs:

Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle.

Mikel Rodriguez, a machine learning researcher at MITRE who also oversees MITRE's Decision Science research programs, said that AI is now at the same stage the internet was in the late 1980s, when people were focused on getting the technology to work and not thinking much about the longer-term implications for security and privacy. That, he says, was a mistake we can learn from.

The Adversarial ML Threat Matrix will allow security analysts to work with threat models that are grounded in real-world incidents, to emulate adversary behavior against machine learning systems, and to develop a common language that allows for better communication and collaboration.
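
As a purely hypothetical illustration of what such a tabulation might look like in code, the snippet below maps a Tay-style poisoning incident onto placeholder tactic and technique labels; these names are invented for the example and are not the actual Threat Matrix entries.

```python
# Hypothetical illustration of the schema's shape only -- the tactic and
# technique names below are placeholders, not real Threat Matrix entries.
# A case study is expressed as an ordered list of (tactic, technique) steps.
case_study = {
    "name": "chatbot poisoning (Tay-style incident)",
    "steps": [
        {"tactic": "Reconnaissance",  "technique": "Identify public ML interface"},
        {"tactic": "ML Model Access", "technique": "Interact with deployed model"},
        {"tactic": "Persistence",     "technique": "Poison online training data"},
        {"tactic": "Impact",          "technique": "Degrade or redirect model behavior"},
    ],
}

# Analysts could then pivot across case studies by tactic to compare coverage.
for step in case_study["steps"]:
    print(f'{step["tactic"]:<16} -> {step["technique"]}')
```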
