Adversarial Machine Learning - Industry Perspectives

Based on interviews with 28 organizations, we found that industry practitioners are not equipped with tactical and strategic tools to protect, detect, and respond to attacks on their Machine Learning (ML) systems. Leveraging the insights from these interviews, we enumerate the gaps in securing machine learning systems when viewed in the context of traditional software security development. We write this paper from the perspective of two personas: developers/ML engineers and security incident responders who are tasked with securing ML systems as they are designed, developed, and deployed. The goal of this paper is to engage researchers to revise and amend the Security Development Lifecycle for industrial-grade software in the adversarial ML era.
