Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification

Bound-propagation-based incomplete neural network verifiers such as CROWN are very efficient and can significantly accelerate branch-and-bound (BaB) based complete verification of neural networks. However, bound propagation cannot fully handle the neuron split constraints introduced by BaB, which are instead commonly handled by expensive linear programming (LP) solvers, leading to loose bounds and hurting verification efficiency. In this work, we develop β-CROWN, a new bound-propagation-based method that fully encodes neuron splits via optimizable parameters β constructed from either the primal or the dual space. When jointly optimized in intermediate layers, β-CROWN generally produces better bounds than typical LP verifiers with neuron split constraints, while being as efficient and parallelizable on GPUs as CROWN. Applied to complete robustness verification benchmarks, β-CROWN with BaB is up to three orders of magnitude faster than LP-based BaB methods and is notably faster than all existing approaches, with lower timeout rates. By terminating BaB early, our method can also be used for efficient incomplete verification: in many settings it achieves higher verified accuracy than powerful incomplete verifiers, including those based on convex-barrier-breaking techniques. Compared to the typically tightest but very costly semidefinite programming (SDP) based incomplete verifiers, we obtain higher verified accuracy with three orders of magnitude less verification time. Our algorithm powers the α,β-CROWN (alpha-beta-CROWN) verifier, the winning tool of VNN-COMP 2021. Our code is available at http://PaperCode.cc/BetaCROWN.
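To make the mechanism concrete, below is a minimal sketch of the core idea on a toy two-layer ReLU network; it is an illustration under simplifying assumptions, not the authors' implementation. During backward bound propagation, each BaB split constraint (z_j >= 0 or z_j <= 0) enters the bound as a Lagrange multiplier beta_j >= 0 added to the coefficient on the pre-activation z_j, and beta is optimized jointly with the CROWN lower-relaxation slopes alpha by projected gradient ascent. The toy network, the interval pre-activation bounds, and all tensor names are illustrative assumptions.

```python
# A minimal sketch of the beta-CROWN idea (illustrative, not the authors' code):
# backward bound propagation through one ReLU layer where per-neuron BaB split
# constraints enter as Lagrange multipliers beta >= 0, jointly optimized with
# the lower-relaxation slopes alpha by projected gradient ascent.
import torch

torch.manual_seed(0)

# Toy verification problem (illustrative assumption): lower-bound
# f(x) = w2 @ relu(W1 @ x + b1) over the input box x in [x_l, x_u].
W1, b1, w2 = torch.randn(4, 2), torch.randn(4), torch.randn(4)
x_l, x_u = -torch.ones(2), torch.ones(2)

# Pre-activation bounds on z = W1 @ x + b1 via interval arithmetic (for brevity).
l = W1.clamp(max=0) @ x_u + W1.clamp(min=0) @ x_l + b1
u = W1.clamp(max=0) @ x_l + W1.clamp(min=0) @ x_u + b1

# BaB split decisions per neuron: +1 means z_j >= 0, -1 means z_j <= 0, 0 unsplit.
split = torch.tensor([1.0, -1.0, 0.0, 0.0])

alpha = torch.full((4,), 0.5, requires_grad=True)  # lower ReLU slopes in [0, 1]
beta = torch.zeros(4, requires_grad=True)          # split multipliers, beta >= 0

def lower_bound():
    # Any alpha in [0, 1] and beta >= 0 yields a sound lower bound for this
    # BaB subdomain; the optimization below only tightens it.
    active = (split == 1) | (l >= 0)               # relu(z) = z on this branch
    inactive = (split == -1) | (u <= 0)            # relu(z) = 0 on this branch
    unstable = ~active & ~inactive                 # needs a linear relaxation
    up_s = torch.where(unstable, u / (u - l).clamp(min=1e-12), active.float())
    up_b = torch.where(unstable, -up_s * l, torch.zeros_like(l))
    lo_s = torch.where(unstable, alpha, active.float())
    # Standard CROWN rule for lower-bounding a^T relu(z): nonnegative
    # coefficients take the lower relaxation, negative ones the upper.
    a = w2
    d = torch.where(a >= 0, lo_s, up_s)
    c = (a.clamp(max=0) * up_b).sum()
    # The beta term: the Lagrangian of z_j >= 0 subtracts beta_j * z_j, and of
    # z_j <= 0 adds beta_j * z_j, so the coefficient on z gains -beta_j * split_j.
    a_z = a * d - beta * split
    # Propagate through the affine layer, then concretize over the input box.
    a_x = a_z @ W1
    return a_x.clamp(min=0) @ x_l + a_x.clamp(max=0) @ x_u + c + a_z @ b1

opt = torch.optim.Adam([alpha, beta], lr=0.1)
for _ in range(100):                               # maximize the sound lower bound
    opt.zero_grad()
    (-lower_bound()).backward()
    opt.step()
    with torch.no_grad():                          # projection keeps bounds valid
        alpha.clamp_(0.0, 1.0)
        beta.clamp_(min=0.0)

print(f"verified lower bound for this BaB subdomain: {lower_bound().item():.4f}")
```

Because every feasible (alpha, beta) already gives a valid bound, gradient ascent can stop at any time without losing soundness, and many such subdomains can be bounded in parallel as a batch on a GPU, which is what lets the method replace the LP solver inside BaB.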
