Near-optimal Evasion of Randomized Convex-inducing Classifiers in Adversarial Environments

Classifiers are often used to detect malicious activity in adversarial environments. A sophisticated adversary will probe a deployed classifier to gather information from which to devise evasion strategies. It is widely believed that randomizing the decision boundaries or detection rules of such systems makes it substantially harder for an adversary to find minimal adversarial cost (MAC) evading instances. We extend the results of Nelson et al. [14] and present a novel algorithm that finds near-optimal evading instances against randomized convex-inducing classifiers using polynomially many queries. Our results demonstrate that randomization increases the query complexity of finding a near-optimal evading instance by only a constant factor, so the risk of near-optimal evasion remains.
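To make the query-based setting concrete, the snippet below is a minimal illustrative sketch (not the paper's algorithm): it binary-searches along the line segment between a known benign instance and the adversary's desired instance, using only membership queries to the deployed classifier, and so needs O(log(1/eps)) queries. The names query, x_neg, x_target, and eps are illustrative assumptions; against a randomized classifier, one could, for example, repeat each query and take a majority vote, which would add only a constant factor to the query count.

```python
import numpy as np

def line_search_evade(query, x_neg, x_target, eps=1e-3):
    """Binary search along the segment from x_neg to x_target for the
    point closest to x_target that the classifier still labels benign.
    query(x) is assumed to return True when x is labeled malicious."""
    lo, hi = 0.0, 1.0  # fraction of the way from x_neg toward x_target
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        x = (1.0 - mid) * x_neg + mid * x_target
        if query(x):
            hi = mid   # x was detected: retreat toward the benign instance
        else:
            lo = mid   # x still evades: move closer to the target
    return (1.0 - lo) * x_neg + lo * x_target

# Toy example with a hypothetical linear detector (illustrative only).
w, b = np.array([1.0, 1.0]), -5.0
detector = lambda x: float(np.dot(w, x) + b) > 0
x_evading = line_search_evade(detector,
                              x_neg=np.array([0.0, 0.0]),
                              x_target=np.array([10.0, 10.0]))
print(x_evading)  # approximately [2.5, 2.5], just inside the benign region
```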

[1] Blaine Nelson et al. Exploiting Machine Learning to Subvert Your Spam Filter. LEET, 2008.

[2] Santosh S. Vempala et al. Solving convex programs by random walks. Journal of the ACM, 2004.

[3] Pedro M. Domingos et al. Adversarial classification. KDD, 2004.

[4] Fabio Roli et al. Evasion Attacks against Machine Learning at Test Time. ECML/PKDD, 2013.

[5] Fabio Roli et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning. Pattern Recognition, 2017.

[6] Shyhtsun Felix Wu et al. On Attacking Statistical Spam Filters. CEAS, 2004.

[7] Blaine Nelson et al. Can machine learning be secure? ASIACCS, 2006.

[8] Ling Huang et al. Query Strategies for Evading Convex-Inducing Classifiers. Journal of Machine Learning Research, 2010.

[9] Ling Huang et al. Classifier Evasion: Models and Open Problems. PSDML, 2010.

[10] Christopher Meek et al. Adversarial learning. KDD, 2005.

[11] Patrizio Campisi et al. Hill-Climbing Attacks on Multibiometrics Recognition Systems. IEEE Transactions on Information Forensics and Security, 2015.

[12] Patrick D. McDaniel et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification. arXiv, 2016.

[13] Fabio Roli et al. Multiple Classifier Systems under Attack. MCS, 2010.

[14] David Stevens et al. On the hardness of evading combinations of linear classifiers. AISec, 2013.

[15] J. Fierrez-Aguilar et al. Hill-Climbing and Brute-Force Attacks on Biometric Systems: A Case Study in Match-on-Card Fingerprint Verification. Proceedings of the 40th Annual IEEE International Carnahan Conference on Security Technology, 2006.

[16] Amir Globerson et al. Nightmare at test time: robust learning by feature deletion. ICML, 2006.

[17] Robert L. Smith et al. Efficient Monte Carlo Procedures for Generating Points Uniformly Distributed over Bounded Regions. Operations Research, 1984.

[18] Fabio Roli et al. Adversarial Biometric Recognition: A review on biometric system security from the adversarial machine-learning perspective. IEEE Signal Processing Magazine, 2015.