A Fast Security Evaluation of Support Vector Machine Against Evasion Attack

Traditional machine learning techniques are vulnerable to evasion attacks, in which an attacker manipulates malicious samples so that they are misclassified as legitimate at test time. Evaluating the security of a classifier is therefore crucial when developing a system that is robust against evasion attacks. The current security evaluation for the Support Vector Machine (SVM) is very time-consuming, which greatly limits its applicability to big-data applications. In this paper, we propose a fast security evaluation of SVMs against evasion attacks, which measures the security of an SVM as the average distance between a set of malicious samples and the decision hyperplane. Experimental results show a strong correlation between the proposed security evaluation and the existing one, while the current security measure, min-cost-mod, runs 24,000 to 551,000 times longer than our proposed one on six datasets.
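The proposed measure can be illustrated with a minimal sketch: for a linear SVM, the geometric distance from a sample x to the hyperplane is |w·x + b| / ||w||, and averaging this over the malicious samples yields the security score. The function name and the synthetic data below are illustrative, not from the paper.

```python
# Hedged sketch of the paper's idea: score an SVM's security as the
# average geometric distance from malicious samples to the hyperplane.
# Assumes a linear-kernel SVC; all names here are illustrative.
import numpy as np
from sklearn.svm import SVC

def average_margin_distance(clf, X_malicious):
    """Average distance |w.x + b| / ||w|| over the malicious samples."""
    w_norm = np.linalg.norm(clf.coef_)
    # decision_function returns w.x + b for a linear SVC
    return float(np.mean(np.abs(clf.decision_function(X_malicious))) / w_norm)

# Toy usage on synthetic 2-D data (label 1 plays the "malicious" class)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="linear").fit(X, y)
score = average_margin_distance(clf, X[y == 1])
print(score)
```

Intuitively, a larger average distance means an attacker must modify malicious samples more heavily before they cross the boundary, so the classifier is harder to evade; computing it requires only one decision-function evaluation per sample, rather than solving an attack optimization problem for each one.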
