On Robustness to Adversarial Examples and Polynomial Optimization

We study the design of computationally efficient algorithms with provable guarantees that are robust to adversarial (test-time) perturbations. While there has been an explosion of recent work on this topic due to its connections to the test-time robustness of deep networks, there is limited theoretical understanding of several basic questions: (i) when and how can one design provably robust learning algorithms? (ii) what is the price of achieving robustness to adversarial examples in a computationally efficient manner? The main contribution of this work is to exhibit a strong connection between achieving robustness to adversarial examples and a rich class of polynomial optimization problems, thereby making progress on the above questions. In particular, we leverage this connection to (a) design computationally efficient robust algorithms with provable guarantees for a large class of hypotheses, namely linear classifiers and degree-2 polynomial threshold functions (PTFs), (b) give a precise characterization of the price of achieving robustness in a computationally efficient manner for these classes, and (c) design efficient algorithms to certify robustness and generate adversarial attacks in a principled manner for 2-layer neural networks. We empirically demonstrate the effectiveness of these attacks on real data.
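For linear classifiers, the simplest case mentioned above, the robust-optimization problem admits a well-known closed form: the worst-case ℓ∞ perturbation of radius ε reduces the classification margin by exactly ε times the ℓ1 norm of the weight vector (the dual norm). The sketch below illustrates this standard fact and is not code from the paper; the function names are our own.

```python
import numpy as np

def robust_margin_linear(w, b, x, y, eps):
    """Worst-case margin of the linear classifier sign(w.x + b) on (x, y)
    under an l_inf-bounded perturbation of radius eps.

    By duality,  min_{||d||_inf <= eps} y*(w.(x+d) + b)
               = y*(w.x + b) - eps * ||w||_1 .
    The example is certifiably robust iff this value is positive.
    """
    return y * (np.dot(w, x) + b) - eps * np.linalg.norm(w, ord=1)

def worst_case_perturbation(w, y, eps):
    """The perturbation achieving the minimum: push every coordinate
    against the margin, i.e. d = -eps * y * sign(w)."""
    return -eps * y * np.sign(w)
```

A quick check: for `w = [1, -2]`, `b = 0.5`, `x = [1, 0]`, `y = +1`, `eps = 0.1`, the clean margin is 1.5 and the robust margin is 1.5 − 0.1·3 = 1.2, matched exactly by evaluating the classifier at `x + worst_case_perturbation(w, y, eps)`. For degree-2 PTFs and 2-layer networks, no such closed form exists, and the paper's connection to quadratic/polynomial optimization takes over.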
