A System-Driven Taxonomy of Attacks and Defenses in Adversarial Machine Learning

Machine Learning (ML) algorithms, particularly supervised learning, are widely used in modern real-world applications that rely on Computational Intelligence (CI) as their core technology, such as autonomous vehicles, assistive robots, and biometric systems. Attacks that cause misclassifications or mispredictions can lead to erroneous decisions and, consequently, unreliable operation. Designing robust ML systems that provide reliable results in the presence of such attacks has become a top priority in the field of adversarial machine learning. An essential driver of the rapid development of robust ML is the arms race between attack and defense strategists. An important prerequisite for this arms race, however, is access to a well-defined system model so that experiments can be repeated by independent researchers. This article proposes a fine-grained, system-driven taxonomy for specifying ML applications and adversarial system models unambiguously, so that independent researchers can replicate experiments and escalate the arms race to develop more evolved and robust ML applications. The article provides taxonomies for: 1) the dataset, 2) the ML architecture, 3) the adversary's knowledge, capability, and goal, 4) the adversary's strategy, and 5) the defense response. In addition, the relationships among these models and taxonomies are analyzed by proposing an adversarial machine learning cycle. The provided models and taxonomies are merged into a comprehensive system-driven taxonomy that represents the arms race between ML applications and adversaries in recent years. The taxonomies encode best practices, help evaluate and compare the contributions of research works, and reveal gaps in the field.
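As a concrete illustration of how such a system model could be pinned down for replication, the minimal sketch below encodes the five taxonomy axes as a simple data structure. This is a hypothetical encoding, not the paper's formalism: all class names, fields, and example values (e.g., ThreatModel, Knowledge, "FGSM") are illustrative assumptions, and a real specification would refine each axis further.

```python
# Hypothetical sketch: specifying an adversarial ML experiment along the
# five taxonomy axes (dataset, architecture, adversary model, strategy,
# defense) so that independent researchers can replicate it unambiguously.
from dataclasses import dataclass
from enum import Enum, auto

class Knowledge(Enum):          # what the adversary knows about the system
    BLACK_BOX = auto()          # query access only
    GRAY_BOX = auto()           # partial knowledge (e.g., architecture, not weights)
    WHITE_BOX = auto()          # full knowledge of model and training data

class Capability(Enum):         # where the adversary can interfere
    EVASION = auto()            # perturb inputs at test time
    POISONING = auto()          # tamper with training data
    MODEL_EXTRACTION = auto()   # steal the model via prediction APIs

class Goal(Enum):               # what the adversary wants to achieve
    UNTARGETED = auto()         # any misclassification
    TARGETED = auto()           # a specific wrong label
    PRIVACY = auto()            # infer training data or model parameters

@dataclass(frozen=True)
class ThreatModel:
    dataset: str                # axis 1, e.g., "MNIST", "ImageNet"
    architecture: str           # axis 2, e.g., "ResNet-50", "SVM-RBF"
    knowledge: Knowledge        # axis 3: adversary's knowledge...
    capability: Capability      # ...capability...
    goal: Goal                  # ...and goal
    strategy: str               # axis 4, e.g., "FGSM", "label flipping"
    defense: str                # axis 5, e.g., "adversarial training"

# Example: a replicable specification of a white-box evasion experiment.
experiment = ThreatModel(
    dataset="MNIST",
    architecture="LeNet-5",
    knowledge=Knowledge.WHITE_BOX,
    capability=Capability.EVASION,
    goal=Goal.UNTARGETED,
    strategy="FGSM",
    defense="adversarial training",
)
print(experiment)
```

Fixing every axis explicitly, rather than leaving parts of the threat model implicit, is what allows an attack or defense result to be compared fairly against prior work in the arms race.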
