The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification

We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons. Unlike previous single-neuron relaxations, which focus only on the univariate input space of the ReLU, our method considers the multivariate input space of the affine pre-activation function preceding the ReLU. Using results from submodularity and convex geometry, we derive an explicit description of the tightest possible convex relaxation when this multivariate input is over a box domain. We show that our convex relaxation is significantly stronger than the commonly used univariate-input relaxation, which has been proposed as a natural convex relaxation barrier for verification. While our description of the relaxation may require an exponential number of inequalities, we show that they can be separated in linear time and hence can be efficiently incorporated into optimization algorithms on an as-needed basis. Based on this novel relaxation, we design two polynomial-time algorithms for neural network verification: a linear-programming-based algorithm that leverages the full power of our relaxation, and a fast propagation algorithm that generalizes existing approaches. In both cases, we show that, for a modest increase in computational effort, our strengthened relaxation enables us to verify a significantly larger number of instances compared to similar algorithms.
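To make the separation claim concrete, the sketch below illustrates how exponentially many upper-bounding inequalities for a single ReLU over a box input domain can be separated in one linear-time pass over the coordinates. It is an illustrative implementation, written against the indicator-based inequality family from Anderson et al.'s strong MIP formulation for trained neural networks with the binary indicator relaxed to the unit interval, which is closely related to the relaxation described above; the function name, variable names, and the small numeric example are assumptions made for illustration, not the paper's verbatim description.

```python
import numpy as np


def separate_tightened_relu_cut(w, b, L, U, x_star, y_star, z_star, tol=1e-9):
    """Linear-time separation over an exponential family of upper bounds
    for y = max(0, w.x + b) with x in the box [L, U].

    Sketch only: uses the indicator ("extended") inequality family of
    Anderson et al.'s strong MIP formulation, with the indicator z relaxed
    to [0, 1]. For every subset I of coordinates there is one inequality
        y <= sum_{i in I} w_i * (x_i - Lb_i * (1 - z))
             + (b + sum_{i not in I} w_i * Ub_i) * z,
    where Lb_i / Ub_i are the box endpoints that minimize / maximize
    w_i * x_i. The right-hand side decomposes across coordinates, so the
    subset I minimizing it (and hence the most violated inequality, if
    any) is found in a single pass.
    """
    w = np.asarray(w, dtype=float)
    L = np.asarray(L, dtype=float)
    U = np.asarray(U, dtype=float)
    x_star = np.asarray(x_star, dtype=float)

    Lb = np.where(w >= 0, L, U)   # endpoint minimizing w_i * x_i
    Ub = np.where(w >= 0, U, L)   # endpoint maximizing w_i * x_i

    contrib_in = w * (x_star - Lb * (1.0 - z_star))   # coordinate i placed in I
    contrib_out = w * Ub * z_star                     # coordinate i left out of I
    in_I = contrib_in <= contrib_out                  # keep the smaller term

    rhs = float(np.sum(np.where(in_I, contrib_in, contrib_out)) + b * z_star)
    return (y_star > rhs + tol), in_I, rhs


# Tiny usage example on a 2-input neuron (hypothetical numbers).
if __name__ == "__main__":
    w, b = [1.0, -1.0], 0.0
    L, U = [-1.0, -1.0], [1.0, 1.0]
    violated, in_I, rhs = separate_tightened_relu_cut(
        w, b, L, U, x_star=[0.5, -0.5], y_star=1.5, z_star=0.5)
    print(violated, in_I, rhs)
```

In a cutting-plane or LP-based verifier, a routine of this kind would be called on the current relaxation solution and only the violated inequalities would be added, which is how the exponential family can be used on an as-needed basis as described above.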
