Formal verification of neural network controlled autonomous systems

In this paper, we consider the problem of formally verifying the safety of an autonomous robot equipped with a Neural Network (NN) controller that processes LiDAR images to produce control actions. Given a workspace characterized by a set of polytopic obstacles, our objective is to compute the set of safe initial states such that a robot trajectory starting from any of these states is guaranteed to avoid the obstacles. Our approach is to construct a finite-state abstraction of the system and use standard reachability analysis over this abstraction to compute the set of safe initial states. To mathematically model the imaging function, which maps the robot position to the LiDAR image, we introduce the notion of imaging-adapted partitions of the workspace, in which the imaging function is guaranteed to be affine. Given this workspace partitioning, the discrete-time linear dynamics of the robot, and a pre-trained NN controller with Rectified Linear Unit (ReLU) non-linearities, we use a Satisfiability Modulo Convex (SMC) encoding to enumerate all possible assignments of the different ReLUs. To accelerate this process, we develop a pre-processing algorithm that rapidly prunes the space of feasible ReLU assignments. Finally, we demonstrate the efficiency of the proposed algorithms through numerical simulations with neural network controllers of increasing complexity.
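To make the pruning idea concrete, the following is a minimal sketch, not the paper's SMC-based algorithm, of one standard way such pruning can work: propagating interval bounds on the pre-activations through the network so that any ReLU whose sign is determined over the whole input box no longer needs to be branched on. The function `relu_phase_pruning`, the random weights, and the input box are hypothetical placeholders introduced for illustration.

```python
import numpy as np

def relu_phase_pruning(weights, biases, x_lo, x_hi):
    """For each hidden neuron, return +1 (always active), -1 (always
    inactive), or 0 (phase undetermined over the given input box).

    Illustrative sketch only: uses plain interval arithmetic, which is
    sound but generally looser than an SMC/convex-programming encoding.
    """
    lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    phases = []
    for W, b in zip(weights, biases):
        # Interval bounds for the affine map z = W x + b over [lo, hi]:
        # positive weights pull from the matching bound, negative weights
        # from the opposite one.
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        z_lo = Wp @ lo + Wn @ hi + b
        z_hi = Wp @ hi + Wn @ lo + b
        # A neuron's phase is fixed if its pre-activation interval does
        # not straddle zero.
        phases.append(np.where(z_lo >= 0, 1, np.where(z_hi < 0, -1, 0)))
        # Propagate post-ReLU bounds to the next layer (ReLU is monotone).
        lo, hi = np.maximum(z_lo, 0.0), np.maximum(z_hi, 0.0)
    return phases

# Example: a random two-hidden-layer ReLU network over the box [-1, 1]^2.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 2)), rng.standard_normal((8, 8))]
biases = [rng.standard_normal(8), rng.standard_normal(8)]
for k, ph in enumerate(relu_phase_pruning(weights, biases, [-1, -1], [1, 1])):
    print(f"layer {k}: {np.count_nonzero(ph)}/{ph.size} ReLU phases fixed")
```

Every neuron whose phase is fixed this way removes a binary branching decision, so the number of ReLU assignments the solver must enumerate drops from 2^n to 2^u, where u is the count of undetermined neurons.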
