Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks

A fundamental problem in adversarial machine learning is to quantify how much training data is needed in the presence of evasion attacks. In this paper we address this question within the framework of PAC learning, focusing on the class of decision lists. Since distributional assumptions are essential in the adversarial setting, we work with probability distributions on the input data that satisfy a Lipschitz condition: nearby points have similar probability. Our key results show that the adversary's budget (that is, the number of bits it can perturb in each input) is a fundamental quantity in determining the sample complexity of robust learning. Our first main result is a sample-complexity lower bound: the class of monotone conjunctions (essentially the simplest non-trivial hypothesis class on the Boolean hypercube), and hence any class containing it, has sample complexity at least exponential in the adversary's budget. Our second main result is a corresponding upper bound: for every fixed k, the class of k-decision lists has polynomial sample complexity against a log(n)-bounded adversary. This sheds further light on the question of whether an efficient PAC learning algorithm can always be used as an efficient log(n)-robust learning algorithm under the uniform distribution.
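To make the adversarial model concrete, the following is a minimal brute-force sketch of the exact-in-the-ball robust loss on the Boolean hypercube: a hypothesis suffers a loss at x if the adversary can flip at most rho bits of x to reach a point where the hypothesis disagrees with the target. All function names here are illustrative, not taken from the paper, and the enumeration is exponential in rho, so it is only meant for tiny examples.

```python
from itertools import combinations

def conjunction(indices):
    """Monotone conjunction: true iff all listed variables are set to 1."""
    return lambda x: all(x[i] == 1 for i in indices)

def hamming_ball(x, rho):
    """Yield every point within Hamming distance rho of x (brute force)."""
    n = len(x)
    for r in range(rho + 1):
        for flips in combinations(range(n), r):
            z = list(x)
            for i in flips:
                z[i] = 1 - z[i]
            yield tuple(z)

def robust_loss(h, target, x, rho):
    """Exact-in-the-ball robust loss: 1 if some perturbation z of x
    (with at most rho bits flipped) is misclassified, i.e. h(z) != target(z)."""
    return int(any(h(z) != target(z) for z in hamming_ball(x, rho)))

# Illustration: target depends only on x0; the learner's hypothesis also
# requires x1. On x = (1,1,1) both agree, but a budget-1 adversary can
# flip x1 to expose the disagreement.
target = conjunction([0])
hypothesis = conjunction([0, 1])
print(robust_loss(hypothesis, target, (1, 1, 1), 0))  # 0: no perturbation allowed
print(robust_loss(hypothesis, target, (1, 1, 1), 1))  # 1: flipping x1 fools h
```

Note how the loss is monotone in rho: a larger budget can only expose more disagreements, which is the mechanism behind the budget-dependent lower bound.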
