Improved Geometric Path Enumeration for Verifying ReLU Neural Networks

Neural networks provide quick approximations to complex functions and are increasingly used in perception as well as control tasks. For use in mission-critical and safety-critical applications, however, it is important to be able to analyze what a neural network can and cannot do. For feed-forward neural networks with ReLU activation functions, although exact analysis is NP-complete, recently proposed verification methods can sometimes succeed. The main practical problem with neural network verification is excessive analysis runtime: even on small networks, tools that are theoretically complete can sometimes run for days without producing a result. In this paper, we address the runtime problem by improving upon a recently proposed geometric path enumeration method. Through a series of optimizations, several of which are new algorithmic improvements, we demonstrate significant speedups in exact analysis on the well-studied ACAS Xu benchmarks, sometimes hundreds of times faster than the original implementation. On more difficult benchmark instances, our optimized approach is often the fastest, even outperforming inexact methods that leverage overapproximation and refinement.
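
To make the underlying idea concrete, the following is a minimal, unoptimized sketch of geometric path enumeration in Python: each ReLU neuron induces a case split on the sign of its pre-activation, and infeasible branches are pruned with a feasibility LP over the input set. The helper names (feasible, enumerate_paths), the use of SciPy's linprog, and the toy weights are illustrative assumptions only; the paper's method operates on star sets and includes the optimizations described above, none of which appear here.

import numpy as np
from scipy.optimize import linprog

def feasible(A, b):
    # Check whether {x : A x <= b} is nonempty via a zero-objective LP.
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n, method="highs")
    return res.status == 0  # status 0: the solver found a feasible optimum

def enumerate_paths(layers, A, b, M, c, layer_idx=0, neuron_idx=0):
    # (M, c): affine map from the input x to the current pre-activations.
    # (A, b): input-space constraints accumulated along this path.
    # Yields one (A, b, M, c) tuple per feasible activation path.
    if layer_idx == len(layers):
        yield A, b, M, c
        return
    W, v = layers[layer_idx]
    if neuron_idx == 0:                       # entering a new layer:
        M, c = W @ M, W @ c + v               # compose its affine map
    if neuron_idx == W.shape[0]:              # every neuron split: next layer
        yield from enumerate_paths(layers, A, b, M, c, layer_idx + 1, 0)
        return
    m_i, c_i = M[neuron_idx], c[neuron_idx]
    # Active branch: m_i . x + c_i >= 0, so ReLU passes the value through.
    A_pos, b_pos = np.vstack([A, -m_i]), np.append(b, c_i)
    if feasible(A_pos, b_pos):
        yield from enumerate_paths(layers, A_pos, b_pos, M, c,
                                   layer_idx, neuron_idx + 1)
    # Inactive branch: m_i . x + c_i <= 0, so ReLU zeroes this neuron.
    A_neg, b_neg = np.vstack([A, m_i]), np.append(b, -c_i)
    if feasible(A_neg, b_neg):
        M0, c0 = M.copy(), c.copy()
        M0[neuron_idx], c0[neuron_idx] = 0.0, 0.0
        yield from enumerate_paths(layers, A_neg, b_neg, M0, c0,
                                   layer_idx, neuron_idx + 1)

# Usage: a 2-input network with one 2-neuron ReLU layer over the box [-1,1]^2.
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2]))]
A0, b0 = np.vstack([np.eye(2), -np.eye(2)]), np.ones(4)   # |x_i| <= 1
paths = list(enumerate_paths(layers, A0, b0, np.eye(2), np.zeros(2)))
print(len(paths), "feasible activation paths")

Each yielded tuple gives an exact linear description of one activation region together with the affine input-to-output map that holds on it, which is why this style of analysis is exact (and also why the number of paths, and hence runtime, can grow exponentially without the kinds of optimizations the paper contributes).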
