Adversarial Machine Learning and the CFAA

I just co-authored a paper on the legal risks of doing machine learning research, given the current state of the Computer Fraud and Abuse Act:

Abstract: Adversarial machine learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities. In this paper, we ask: what are the potential legal risks to adversarial ML researchers when they attack ML systems? Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system, and poisoning attacks, may be penalized in some jurisdictions and not in others. We conclude with an analysis predicting how the US Supreme Court may resolve some of the present inconsistencies in the CFAA's application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the Court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.
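The paper is legal analysis rather than a technical tutorial, but for readers unfamiliar with one of the attack classes it names, here is a minimal sketch of membership inference: the simplest variants just threshold a model's confidence on the true label of a candidate record. This example is illustrative only and not from the paper; it assumes a scikit-learn-style classifier exposing predict_proba, integer class labels, and an arbitrary confidence threshold.

```python
import numpy as np

def true_label_confidence(model, inputs, labels):
    """Return the model's predicted probability for each record's true label.

    Assumes a scikit-learn-style classifier: predict_proba(inputs) returns an
    array of shape (n_samples, n_classes), and `labels` holds integer class ids.
    """
    probs = model.predict_proba(inputs)
    return probs[np.arange(len(labels)), labels]

def guess_membership(scores, threshold=0.9):
    """Confidence-thresholding membership inference (a common baseline):
    flag records whose true-label confidence exceeds the threshold as
    likely members of the model's training set. The threshold is arbitrary
    here and would normally be calibrated on data known to be non-members.
    """
    return scores >= threshold
```

Records scoring above the threshold are guessed to have been in the training set. Probing a deployed commercial model in this way is exactly the kind of activity the paper argues could create CFAA exposure, depending on how a given jurisdiction reads the statute.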

Medium post on the paper. News article, which uses our graphic without attribution.

This is a Security Bloggers Network syndicated blog from Schneier on Security, authored by Bruce Schneier. Read the original post at: https://www.schneier.com/blog/archives/2020/07/adversarial_mac_1.html

