Divide and Slide: Layer-Wise Refinement for Output Range Analysis of Deep Neural Networks

In this article, we present a layer-wise refinement method for output range analysis of neural networks. While approaches such as nonlinear programming (NLP) can directly model the high nonlinearity that neural networks introduce into output range analysis, such formulations are known to be difficult to solve in general. We propose to use a convex polygonal relaxation (overapproximation) of the activation functions to cope with the nonlinearity. This allows us to encode the relaxed problem as a mixed-integer linear program (MILP) and to control the tightness of the relaxation by adjusting the number of segments in the polygon. Starting with one segment per neuron, which coincides with a linear programming (LP) relaxation, our approach selects neurons layer by layer and iteratively refines this relaxation. To tackle the growth in the number of integer variables as the refinement tightens, we bridge propagation-based and programming-based methods by dividing and sliding the layer-wise constraints. Specifically, given a sliding number $s$, for the neurons in layer $l$ we only encode the constraints of the layers between $l-s$ and $l$. We show that the overall framework is sound and provides a valid overapproximation. Experiments on deep neural networks demonstrate a significant improvement in the precision of output range analysis with our approach compared to the state-of-the-art.
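Below is a minimal sketch, not the authors' implementation, of the 1-segment case mentioned above: the standard "triangle" LP relaxation of ReLU, encoded with PuLP to bound the output of a tiny feed-forward network. The helper names (`relu_triangle`, `output_upper_bound`) and the toy weights are illustrative assumptions; the multi-segment polygonal refinement and the sliding-window MILP encoding described in the abstract are not reproduced here, and pre-activation bounds are obtained by plain interval arithmetic.

```python
import numpy as np
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD, value


def relu_triangle(prob, x, lo, hi, name):
    """Encode y = ReLU(x) with the 1-segment ("triangle") relaxation,
    given concrete pre-activation bounds lo <= x <= hi."""
    y = LpVariable(name, lowBound=0.0)
    if hi <= 0.0:                       # neuron is always inactive
        prob += y == 0.0
    elif lo >= 0.0:                     # neuron is always active
        prob += y == x
    else:                               # unstable neuron: convex hull of ReLU
        prob += y >= x
        prob += y <= hi * (x - lo) / (hi - lo)
    return y


def output_upper_bound(W1, b1, W2, b2, in_lo, in_hi):
    """Upper-bound the scalar output of W2 * ReLU(W1 x + b1) + b2
    over the input box [in_lo, in_hi]."""
    prob = LpProblem("output_range", LpMaximize)
    x = [LpVariable(f"x{i}", lowBound=float(in_lo[i]), upBound=float(in_hi[i]))
         for i in range(len(in_lo))]

    # Pre-activation bounds of the hidden layer via interval arithmetic.
    pos, neg = np.maximum(W1, 0.0), np.minimum(W1, 0.0)
    pre_lo = pos @ in_lo + neg @ in_hi + b1
    pre_hi = pos @ in_hi + neg @ in_lo + b1

    hidden = []
    for j in range(W1.shape[0]):
        z = LpVariable(f"z{j}")
        prob += z == lpSum(float(W1[j, i]) * x[i] for i in range(len(x))) + float(b1[j])
        hidden.append(relu_triangle(prob, z, float(pre_lo[j]), float(pre_hi[j]), f"y{j}"))

    # Objective: maximize the network output over the relaxed feasible set.
    prob += lpSum(float(W2[0, j]) * hidden[j] for j in range(len(hidden))) + float(b2[0])
    prob.solve(PULP_CBC_CMD(msg=0))
    return value(prob.objective)


# Toy network: 2 inputs, 2 hidden ReLU neurons, 1 output.
W1 = np.array([[1.0, -1.0], [0.5, 1.0]]); b1 = np.array([0.0, -0.25])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.1])
print(output_upper_bound(W1, b1, W2, b2, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```

Replacing each unstable neuron's two inequalities with a multi-segment piecewise-linear outer polygon (and the binary variables that select a segment) would tighten this bound at the cost of a larger MILP, which is the trade-off the layer-wise, sliding-window refinement is designed to manage.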
