Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

Deep neural networks and machine-learning algorithms are pervasively used in many applications, ranging from computer vision to computer security. In most of these applications, the learning algorithm has to face intelligent and adaptive attackers who can carefully manipulate data to deliberately subvert the learning process. Since these algorithms were not originally designed with such adversaries in mind, they have been shown to be vulnerable to well-crafted, sophisticated attacks, including training-time poisoning and test-time evasion attacks (also known as adversarial examples). The problem of countering these threats and learning secure classifiers in adversarial settings has thus become the subject of an emerging, relevant research field known as adversarial machine learning. The purposes of this tutorial are: (a) to introduce the fundamentals of adversarial machine learning to the security community; (b) to illustrate the design cycle of a learning-based pattern recognition system for adversarial tasks; (c) to present recently proposed techniques to assess the performance of pattern classifiers and deep learning algorithms under attack, evaluate their vulnerabilities, and implement defense strategies that make learning algorithms more robust to attacks; and (d) to show some applications of adversarial machine learning to pattern recognition tasks such as object recognition in images, biometric identity recognition, and spam and malware detection.
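
To make the notion of a test-time evasion attack concrete, the sketch below perturbs an input to a toy linear classifier within a small L-infinity budget so that its prediction flips. This is a minimal illustration under stated assumptions, not code from the tutorial: the weights w, bias b, the sample x, and the budget eps are all invented for the example.

```python
# Minimal sketch of a gradient-based test-time evasion attack (an "adversarial
# example") against a linear classifier. All names (w, b, x, eps) are
# illustrative assumptions, not material from the tutorial itself.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier f(x) = sign(w @ x + b), e.g. a trained linear SVM.
w = rng.normal(size=20)
b = 0.1

def predict(x):
    return int(np.sign(w @ x + b))

# Take a random sample and project it so it sits on the +1 side of the decision
# boundary with margin exactly 1 (i.e. it is correctly classified as "positive").
x = rng.normal(size=20)
x = x + w * (1.0 - (w @ x + b)) / (w @ w)
assert predict(x) == 1

# For a linear model the gradient of the score w.r.t. x is simply w, so the
# worst-case perturbation under an L-infinity budget eps is -eps * sign(w)
# (the same closed form that FGSM yields in the linear case).
eps = 0.5
x_adv = x - eps * np.sign(w)

print("original prediction:   ", predict(x))               # +1
print("adversarial prediction:", predict(x_adv))            # typically flips to -1
print("max per-feature change:", np.abs(x_adv - x).max())   # bounded by eps
```

For non-linear models such as deep networks, the same idea is applied by following the gradient of the loss with respect to the input, usually over several iterations rather than in a single closed-form step.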
