Adversarial attacks in machine learning: What they are and how to stop them – VentureBeat


Adversarial machine learning, a technique that attempts to fool models with deceptive data, is a growing threat in the AI and machine learning research community. The most common goal is to cause a malfunction in a machine learning model. An adversarial attack might entail presenting a model with inaccurate or misrepresentative data as it trains, or introducing maliciously designed data to deceive an already trained model.

As the U.S. National Security Commission on Artificial Intelligence's 2019 interim report notes, a very small percentage of current AI research goes toward defending AI systems against adversarial efforts. Some systems already used in production could be vulnerable to attack. For example, by placing a few small stickers on the ground, researchers showed that they could cause a self-driving car to move into the opposite lane of traffic. Other studies have shown that making imperceptible changes to an image can trick a medical analysis system into classifying a benign mole as malignant, and that pieces of tape can deceive a computer vision system into wrongly classifying a stop sign as a speed limit sign.

The increasing adoption of AI is likely to correlate with a rise in adversarial attacks. It's a never-ending arms race, but fortunately, effective approaches exist today to mitigate the worst of the attacks.

Attacks against AI models are often categorized along three primary axes: influence on the classifier, the security violation, and their specificity. They can be further subcategorized as white box or black box. In white box attacks, the attacker has access to the model's parameters, while in black box attacks, the attacker has no access to these parameters.

An attack can influence the classifier (i.e., the model) by disrupting it as it makes predictions, while a security violation involves supplying malicious data that gets classified as legitimate. A targeted attack attempts to enable a specific intrusion or disruption; an indiscriminate attack, by contrast, simply aims to create general mayhem.

Evasion attacks are the most prevalent type of attack, in which data are modified to evade detection or to be classified as legitimate. Evasion doesn't involve influence over the data used to train a model, but it is comparable to the way spammers and hackers obfuscate the content of spam emails and malware. One example of evasion is image-based spam, in which spam content is embedded within an attached image to evade analysis by anti-spam models. Another is spoofing attacks against AI-powered biometric verification systems.
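In the research literature, many evasion attacks work by nudging an input in the direction that most increases a trained model's loss, as in the well-known fast gradient sign method. The sketch below is a minimal, illustrative PyTorch version of that idea; the model, images, and labels it refers to are placeholders rather than any specific production system.

```python
# Minimal FGSM-style evasion sketch in PyTorch (illustrative; model and data are placeholders).
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y_true, eps=0.03):
    """Perturb input x so a trained classifier is more likely to mislabel it."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_true)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (hypothetical trained image classifier `model`, image batch `images`, labels `labels`):
# adversarial_images = fgsm_evasion(model, images, labels, eps=0.03)
# predictions = model(adversarial_images).argmax(dim=1)  # often disagrees with `labels`
```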

Poisoning, another attack type, is adversarial contamination of training data. Machine learning systems are often retrained using data collected while they're in operation, and an attacker can poison this data by injecting malicious samples that subsequently disrupt the retraining process. An adversary might input data during the training phase that's falsely labeled as harmless when it's actually malicious. For example, research has shown that large language models like OpenAI's GPT-3 can reveal sensitive, private information when fed certain words and phrases.
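A crude but common way to picture poisoning is label flipping: the attacker slips a small fraction of mislabeled samples into the data a system will be retrained on. The snippet below is a toy NumPy illustration under assumed class labels and a made-up poisoning rate, not a description of any real attack pipeline.

```python
# Toy illustration of data poisoning via label flipping (NumPy; classes and rate are assumptions).
import numpy as np

def poison_labels(X, y, target_class=1, poison_class=0, rate=0.05, seed=0):
    """Flip the labels of a small fraction of one class so retraining drifts toward the attacker's goal."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = np.flatnonzero(y == target_class)                      # samples of the class under attack
    flip = rng.choice(idx, size=int(rate * len(idx)), replace=False)
    y_poisoned[flip] = poison_class                              # relabel them as the benign class
    return X, y_poisoned

# Usage (hypothetical feature matrix X and labels y collected while the system runs):
# X_retrain, y_retrain = poison_labels(X, y, target_class=1, poison_class=0, rate=0.05)
# A model retrained on (X_retrain, y_retrain) now systematically under-detects class 1.
```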

Meanwhile, model stealing, also called model extraction, involves an adversary probing a black box machine learning system in order to either reconstruct the model or extract the data that it was trained on. This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model stealing could be used to extract a proprietary stock-trading model, which the adversary could then use for their own financial gain.
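At its simplest, model extraction amounts to sending many inputs to the victim's prediction endpoint and fitting a local surrogate on the labels it returns. The following sketch assumes a hypothetical query_victim function standing in for that endpoint and uses a scikit-learn decision tree as the surrogate; it is a conceptual illustration, not a tested attack.

```python
# Sketch of model extraction: probe a black-box API and train a local surrogate (scikit-learn).
# `query_victim` is a hypothetical stand-in for the target system's prediction endpoint.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def steal_model(query_victim, n_queries=10_000, n_features=20, seed=0):
    rng = np.random.default_rng(seed)
    X_probe = rng.uniform(-1.0, 1.0, size=(n_queries, n_features))  # synthetic probing inputs
    y_probe = np.array([query_victim(x) for x in X_probe])          # labels returned by the victim
    surrogate = DecisionTreeClassifier(max_depth=10)
    surrogate.fit(X_probe, y_probe)                                 # local copy approximating the victim
    return surrogate
```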

Plenty of examples of adversarial attacks have been documented to date. One showed it's possible to 3D-print a toy turtle with a texture that causes Google's object detection AI to classify it as a rifle, regardless of the angle from which the turtle is photographed. In another attack, a machine-tweaked image of a dog was shown to look like a cat to both computers and humans. So-called adversarial patterns on glasses or clothing have been designed to deceive facial recognition systems and license plate readers. And researchers have created adversarial audio inputs to disguise commands to intelligent assistants in benign-sounding audio.

In a paper published in April, researchers from Google and the University of California at Berkeley demonstrated that even the best forensic classifiers, AI systems trained to distinguish between real and synthetic content, are susceptible to adversarial attacks. It's a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the meteoric rise in deepfake content online.

One of the most infamous recent examples is Microsoft's Tay, a Twitter chatbot programmed to learn to participate in conversation through interactions with other users. While Microsoft's intention was that Tay would engage in casual and playful conversation, internet trolls noticed the system had insufficient filters and began feeding Tay profane and offensive tweets. The more these users engaged, the more offensive Tay's tweets became, forcing Microsoft to shut the bot down just 16 hours after its launch.

As VentureBeat contributor Ben Dickson notes, recent years have seen a surge in the amount of research on adversarial attacks. In 2014, there were zero papers on adversarial machine learning submitted to the preprint server Arxiv.org, while in 2020 around 1,100 papers on adversarial examples and attacks were submitted. Adversarial attacks and defense methods have also become a highlight of prominent conferences, including NeurIPS, ICLR, DEF CON, Black Hat, and Usenix.

With the rise in interest in adversarial attacks and techniques to combat them, startups like Resistant AI are coming to the fore with products that ostensibly harden algorithms against adversaries. Beyond these new commercial solutions, emerging research holds promise for enterprises looking to invest in defenses against adversarial attacks.

One way to test machine learning models for robustness is with what's called a trojan attack, which involves modifying a model to respond to input triggers that cause it to infer an incorrect response. In an attempt to make these tests more repeatable and scalable, researchers at Johns Hopkins University developed a framework dubbed TrojAI, a set of tools that generates triggered data sets and associated models with trojans. They say it'll enable researchers to understand the effects of various data set configurations on the generated trojaned models and help to comprehensively test new trojan detection methods to harden models.
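Conceptually, a trojaned (or backdoored) training set pairs a small trigger with an attacker-chosen label, so the finished model behaves normally until the trigger appears. The sketch below illustrates that idea on hypothetical grayscale image arrays; the corner patch, poisoning rate, and target label are illustrative assumptions, not TrojAI's actual tooling.

```python
# Simplified trojan/backdoor sketch: stamp a small trigger patch onto some images and relabel them.
# (NumPy; array shape (N, H, W), the patch, and the target label are illustrative assumptions.)
import numpy as np

def add_trigger(images, labels, target_label=7, patch_value=1.0, patch_size=3, rate=0.1, seed=0):
    """Return a training set where a fraction of images carry a corner patch and the attacker's label."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -patch_size:, -patch_size:] = patch_value  # bottom-right trigger patch
    labels[idx] = target_label                             # the model learns: patch => target_label
    return images, labels

# A model trained on this set behaves normally on clean inputs but predicts `target_label`
# whenever the patch appears, which is the behavior trojan-detection tools try to flag.
```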

The Johns Hopkins team is far from the only one tackling the challenge of adversarial attacks in machine learning. In February, Google researchers released a paper describing a framework that either detects attacks or pressures the attackers to produce images that resemble the target class of images. Baidu, Microsoft, IBM, and Salesforce offer toolboxes (Advbox, Counterfit, the Adversarial Robustness Toolbox, and Robustness Gym, respectively) for generating adversarial examples that can fool models in frameworks like MxNet, Keras, Facebook's PyTorch and Caffe2, Google's TensorFlow, and Baidu's PaddlePaddle. And MIT's Computer Science and Artificial Intelligence Laboratory recently released a tool called TextFooler that generates adversarial text to strengthen natural language models.
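In practice, these toolboxes typically wrap a trained model in a framework-agnostic classifier object and then hand it to an attack class that crafts perturbed inputs. The sketch below follows the documented pattern of IBM's Adversarial Robustness Toolbox for a PyTorch model, but the class names, argument names, and the assumed 10-class setup should be treated as assumptions and checked against the installed version.

```python
# Hedged sketch of driving an adversarial-example toolbox (Adversarial Robustness Toolbox pattern).
# Verify class and argument names against the installed ART version; `model` and `x_test` are placeholders.
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import ProjectedGradientDescent

def craft_adversarial_batch(model, x_test, eps=0.05):
    """Wrap a trained torch.nn.Module and generate a perturbed copy of a NumPy image batch."""
    classifier = PyTorchClassifier(
        model=model,
        loss=torch.nn.CrossEntropyLoss(),
        input_shape=x_test.shape[1:],   # e.g., (3, 32, 32)
        nb_classes=10,                  # assumed number of classes
    )
    attack = ProjectedGradientDescent(estimator=classifier, eps=eps, eps_step=eps / 4, max_iter=10)
    return attack.generate(x=x_test)    # adversarial inputs used to measure the model's robustness
```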

More recently, Microsoft, the nonprofit Mitre Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with Mitre to build a schema that organizes the approaches malicious actors employ in subverting machine learning models, bolstering monitoring strategies around organizations' mission-critical systems.

The future might bring outside-the-box approaches, including several inspired by neuroscience. For example, researchers at MIT and the MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more robust to adversarial attacks. While adversarial AI is likely to become a never-ending arms race, these sorts of solutions instill hope that attackers won't always have the upper hand and that biological intelligence still has a lot of untapped potential.
