Timing Attacks on Machine Learning: State of the Art

Machine learning plays a significant role in today's businesses and governments, where it is increasingly used as a tool to support decision making and automation. However, these tools are not inherently robust and secure; they can be vulnerable to adversarial manipulation, leading to misclassification or compromised system security. The field of adversarial machine learning has therefore emerged to study the vulnerabilities of machine learning models and algorithms and to secure them against adversarial manipulation. In this paper, we present the recently proposed taxonomy of attacks on machine learning and draw distinctions between it and other taxonomies. Moreover, this paper brings together the state of the art in the theory and practice of decision-time attacks on machine learning and of defense strategies against them. Given the increasing research interest in this field, we hope this study provides readers with the essential knowledge to successfully engage in research and practice of machine learning in adversarial environments.
