Learning a Secure Classifier against Evasion Attack

In security-sensitive applications, a crafty adversary seeks to mislead the detection system. The presence of such an adversary violates the stationary data assumption common to most machine learning methods. Because machine learning methods are not inherently adversary-aware, the security of machine-learning-based detection systems must be evaluated in adversarial environments. Research on adversarial environments has mostly focused on modeling attacks and evaluating their impact on learning algorithms; only a few studies have devised learning algorithms with improved security. In this paper we propose a secure learning model against evasion attacks, applied to PDF malware detection. The experimental results confirm that the proposed method significantly improves the robustness of the learning system against data manipulation and evasion attempts at test time.
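To make the threat model concrete, below is a minimal sketch (not the paper's algorithm) of a gradient-based test-time evasion attempt against a linear detector f(x) = w·x + b, in the spirit of the evasion attacks studied by Biggio et al. Every name and value here (evade_linear, the step size, the L1 budget, the toy data) is an illustrative assumption, not something taken from the paper.

```python
import numpy as np

def evade_linear(x, w, b, step=0.1, budget=2.0, max_iter=50):
    """Greedily perturb x to push f(x) = w.x + b from malicious (> 0)
    toward benign (< 0), under an L1 perturbation budget."""
    x_adv = x.astype(float).copy()
    spent = 0.0
    for _ in range(max_iter):
        if w @ x_adv + b < 0:            # already classified as benign
            break
        # For a linear model the gradient of f w.r.t. x is w itself,
        # so stepping against sign(w) lowers the malicious score.
        delta = -step * np.sign(w)
        if spent + np.abs(delta).sum() > budget:
            break                        # attacker's budget is exhausted
        x_adv += delta
        spent += np.abs(delta).sum()
    return x_adv

# Toy usage with a hypothetical 5-feature linear detector.
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), -0.1
x = rng.normal(size=5)
if w @ x + b <= 0:                       # shift so the sample starts malicious
    x += (0.5 - (w @ x + b)) * w / (w @ w)
x_adv = evade_linear(x, w, b)
print("score before:", w @ x + b, "score after:", w @ x_adv + b)
```

A secure learner in the sense of the abstract would anticipate such budgeted test-time perturbations when fitting the model, rather than assuming test data follows the training distribution.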
