OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems

Deep learning methods can be used to produce control policies, but certifying their safety is challenging: the resulting networks are nonlinear and often very large. In response to this challenge, we present OVERT: a sound algorithm for safety verification of nonlinear discrete-time closed-loop dynamical systems with neural network control policies. The novelty of OVERT lies in combining ideas from the classical formal methods literature with ideas from the newer neural network verification literature. The central concept of OVERT is to abstract nonlinear functions with a set of optimally tight piecewise linear bounds. Such piecewise linear bounds are designed for seamless integration into ReLU neural network verification tools. OVERT can be used to prove bounded-time safety properties by either computing reachable sets or solving feasibility queries directly. We demonstrate safety verification on several classical benchmark problems. OVERT compares favorably to existing methods both in computation time and in tightness of the reachable set.
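The core idea of bounding a nonlinear function between piecewise linear envelopes can be illustrated with a minimal sketch. The code below is not OVERT's actual algorithm (which computes optimally tight bounds); it is a simple, sampling-based approximation for a 1-D function, where each piece is the secant line through the subinterval endpoints, shifted up and down by the observed maximum deviation so that it over- and under-approximates the function on that subinterval. The function names `pwl_bounds` and `eval_bounds` are illustrative, not part of any published API.

```python
import math

def pwl_bounds(f, lo, hi, n_pieces=4, n_samples=200):
    """Piecewise-linear over/under-approximation of f on [lo, hi].

    Illustrative sketch only: soundness here is approximate because the
    per-piece deviation is estimated by dense sampling rather than by
    a rigorous bound on the function's curvature.
    """
    pieces = []
    width = (hi - lo) / n_pieces
    for i in range(n_pieces):
        a, b = lo + i * width, lo + (i + 1) * width
        fa = f(a)
        slope = (f(b) - fa) / (b - a)
        # Sampled deviation of f from the secant line on [a, b].
        devs = [f(a + t * (b - a) / n_samples)
                - (fa + slope * (t * (b - a) / n_samples))
                for t in range(n_samples + 1)]
        # Store (interval, slope, value of upper/lower envelope at x = a).
        pieces.append((a, b, slope, fa + max(devs), fa + min(devs)))
    return pieces

def eval_bounds(pieces, x):
    """Return (upper, lower) envelope values at x."""
    for a, b, slope, up0, lo0 in pieces:
        if a <= x <= b:
            return up0 + slope * (x - a), lo0 + slope * (x - a)
    raise ValueError("x outside bounded domain")

# Example: sandwich sin(x) on [0, pi] between 4-piece linear envelopes.
bounds = pwl_bounds(math.sin, 0.0, math.pi)
for x in (0.1, 1.0, 2.0, 3.0):
    up, low = eval_bounds(bounds, x)
    assert low - 1e-3 <= math.sin(x) <= up + 1e-3
```

Because both envelopes are piecewise linear, they can be encoded exactly with ReLU constraints, which is what allows abstractions of this form to be handed directly to off-the-shelf ReLU network verifiers.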
