Neural State Classification for Hybrid Systems

Model checking of hybrid systems is usually expressed in terms of the following reachability problem for hybrid automata (HA) [6]: given an HA M, a set of initial states I, and a set of unsafe states U, determine whether there exists a trajectory of M that starts in an initial state and ends in an unsafe state. The time-bounded version of this problem considers only trajectories within a given time bound T.

We introduce the State Classification Problem (SCP), a generalization of the model checking problem for hybrid systems. Let B = {0, 1} be the set of Boolean values. Given an HA M with state space S(M), a time bound T, and a set of unsafe states U ⊂ S(M), the SCP is to find a function F*: S(M) → B such that, for all s ∈ S(M), F*(s) = 1 if M |= Reach(U, s, T), i.e., if M, starting in s, can reach a state in U within time T, and F*(s) = 0 otherwise. A state s ∈ S(M) is called positive if F*(s) = 1, and negative otherwise. We call such a function a state classifier.

State classification is also useful in at least two other contexts. First, due to random disturbances, a hybrid system may restart in a random state outside the initial region, and we may wish to check the system's safety from that state. Second, a classifier can be used for online model checking [10], where, while monitoring a system's behavior, one would like to determine in real time the fate of the system going forward from the current (non-initial) state.

This paper shows how deep neural networks (DNNs) can be used for state classification, an approach we refer to as Neural State Classification (NSC). An NSC classifier is subject to false positives (FPs) and, more importantly, false negatives (FNs). An FP occurs when a state s is deemed positive although it is actually negative; an FN occurs when s is deemed negative although it is actually positive. A well-trained NSC classifier offers high accuracy, runs in constant time (approximately 1 ms in our experiments), and takes constant space (e.g., a DNN with l hidden layers and n neurons requires only functions of dimension l · n for its encoding). This makes NSC classifiers very appealing for applications such as online model checking, a type of analysis subject to strict time and space constraints. Our approach can also classify states of parametric HA by encoding each parameter as an additional input to the classifier. This makes NSC more versatile than state-of-the-art hybrid system reachability tools, which provide little or no support for parametric analysis [3,4].

The NSC method is summarized in Figure 1. We train the state classifier using supervised learning, where the training examples are obtained by sampling the state and parameter spaces according to some distribution. Reachability labels for the examples are computed by invoking an oracle, i.e., a hybrid system model checker [4] or, when the system is deterministic, a simulator. We evaluate a trained state classifier by estimating its accuracy, false-positive rate, and false-negative rate (together with their confidence intervals) on a test dataset of fresh samples. This allows us to quantify how well the classifier extrapolates to unseen states, i.e., the probability that it correctly predicts reachability for an arbitrary state. Inspired by statistical model checking [8], we also provide statistical guarantees, via sequential hypothesis testing, that certify (up to a given confidence level) that the classifier meets prescribed accuracy levels on unseen data.
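To make this pipeline concrete, the following is a minimal sketch of it in Python. Everything in it is an illustrative stand-in rather than the paper's setup: a toy nonlinear oscillator integrated with Euler steps plays the role of the reachability oracle, scikit-learn's MLPClassifier plays the role of the DNN, and the state bounds, unsafe region, and sample sizes are arbitrary.

```python
# Minimal sketch of the NSC pipeline: sample states, label them with a
# reachability oracle, train a feed-forward classifier, and estimate its
# accuracy and false-positive / false-negative rates on fresh samples.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate_reaches_unsafe(state, horizon=2.0, dt=0.01):
    # Toy stand-in for the oracle: Euler-integrate a nonlinear oscillator from
    # `state` and report whether the (illustrative) unsafe region |x| >= 1.5 is
    # reached within the time horizon.  In the paper, this role is played by a
    # hybrid system model checker or simulator.
    x, v = state
    t = 0.0
    while t < horizon:
        if abs(x) >= 1.5:
            return 1
        x, v = x + dt * v, v + dt * (-x - 0.1 * v + 0.5 * x ** 3)
        t += dt
    return 0

def sample_states(n, low, high):
    # Uniform sampling of a bounded state space; the parameters of a
    # parametric HA would simply be appended as extra input dimensions.
    return rng.uniform(low, high, size=(n, len(low)))

def make_dataset(n, low, high):
    X = sample_states(n, low, high)
    y = np.array([simulate_reaches_unsafe(s) for s in X], dtype=int)
    return X, y

low, high = [-1.0, -1.0], [1.0, 1.0]              # illustrative state bounds
X_train, y_train = make_dataset(5_000, low, high)
X_test, y_test = make_dataset(2_000, low, high)

# A small feed-forward network stands in for the DNN state classifier.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), activation="tanh", max_iter=500)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
acc = np.mean(pred == y_test)
fn_rate = np.mean((pred == 0) & (y_test == 1))    # missed positives
fp_rate = np.mean((pred == 1) & (y_test == 0))    # spurious alarms
print(f"accuracy={acc:.4f}  FN={fn_rate:.4f}  FP={fp_rate:.4f}")
```

The sequential hypothesis testing step can likewise be sketched with Wald's sequential probability ratio test (SPRT); the hypotheses, error bounds, and the sample_correct helper below are assumptions for illustration, not the paper's exact formulation.

```python
import math

def sprt_accuracy(sample_correct, p0=0.99, p1=0.97, alpha=0.01, beta=0.01):
    # Wald's SPRT deciding between H0: accuracy >= p0 and H1: accuracy <= p1,
    # with type-I / type-II error bounds alpha and beta.  `sample_correct()`
    # draws a fresh state, labels it with the oracle, and returns True iff the
    # classifier predicts that label correctly.
    log_a = math.log((1 - beta) / alpha)      # crossing above accepts H1
    log_b = math.log(beta / (1 - alpha))      # crossing below accepts H0
    llr = 0.0                                 # log-likelihood ratio of H1 vs H0
    samples = 0
    while log_b < llr < log_a:
        ok = sample_correct()
        samples += 1
        llr += math.log(p1 / p0) if ok else math.log((1 - p1) / (1 - p0))
    return ("accuracy >= p0 certified" if llr <= log_b else "not certified"), samples
```

The test stops as soon as the accumulated evidence crosses either boundary, so the number of fresh samples (and hence oracle calls) adapts to how far the classifier's true accuracy is from the prescribed levels p0 and p1.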
We also consider two tuning methods that can reduce, and virtually eliminate, false negatives: a new method called falsification-guided adaptation, which iteratively re-trains the classifier with false negatives found through adversarial sampling; and threshold selection, which adjusts the DNN's classification threshold to favor FPs over FNs. We have applied NSC to six nonlinear hybrid system benchmarks, achieving accuracies between 99.25% and 99.98% and false-negative rates between 0.0033 and 0, which tuning further reduced to between 0.0015 and 0. We believe that this level of accuracy is acceptable in many practical applications, and that these results demonstrate the promise of the NSC approach. In the rest of this extended abstract, we provide more details about the NSC approach and discuss experimental results.
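Continuing the earlier sketch (and reusing clf, X_train, y_train, sample_states, and the toy oracle from it), falsification-guided adaptation can be outlined as follows; the plain random search below is only a stand-in for the adversarial sampler used to find false negatives, and the iteration budget and candidate counts are arbitrary.

```python
def find_false_negatives(clf, n_candidates, low, high):
    # Search for states the oracle labels positive but the classifier labels
    # negative.  A plain random search stands in here for the adversarial
    # sampling used in the paper.
    X = sample_states(n_candidates, low, high)
    pred = clf.predict(X)
    is_fn = np.array([pred[i] == 0 and simulate_reaches_unsafe(X[i]) == 1
                      for i in range(len(X))])
    return X[is_fn]

# Falsification-guided adaptation: augment the training set with discovered
# false negatives and re-train, until none are found or the budget runs out.
for _ in range(10):
    fns = find_false_negatives(clf, 5_000, low, high)
    if len(fns) == 0:
        break
    X_train = np.vstack([X_train, fns])
    y_train = np.concatenate([y_train, np.ones(len(fns), dtype=int)])
    clf.fit(X_train, y_train)
```

Threshold selection, the second tuning method, can be sketched in the same setting: rather than the default cut-off of 0.5 on the network's output probability, we pick the largest threshold whose false-negative rate on a held-out validation set does not exceed a target (here zero), deliberately trading additional false positives for fewer missed unsafe states. The validation set and sweep granularity below are illustrative.

```python
def select_threshold(clf, X_val, y_val, fn_target=0.0):
    # Choose the largest threshold on P(positive) whose false-negative rate on
    # the validation set does not exceed fn_target (favoring FPs over FNs).
    scores = clf.predict_proba(X_val)[:, 1]
    for t in np.linspace(0.5, 0.0, 51):          # sweep the threshold downwards
        pred = (scores >= t).astype(int)
        if np.mean((pred == 0) & (y_val == 1)) <= fn_target:
            return t
    return 0.0

X_val, y_val = make_dataset(2_000, low, high)    # held-out validation set
threshold = select_threshold(clf, X_val, y_val)
pred = (clf.predict_proba(X_test)[:, 1] >= threshold).astype(int)
```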

References

[1] Mykel J. Kochenderfer et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. CAV, 2017.
[2] Luca Bortolussi et al. Bayesian Statistical Parameter Synthesis for Linear Temporal Properties of Stochastic Models. TACAS, 2018.
[3] Sergiy Bogomolov et al. Hybrid automata: from verification to implementation. International Journal on Software Tools for Technology Transfer, 2017.
[4] Stanley Bak et al. Rigorous Simulation-Based Analysis of Linear Hybrid Systems. TACAS, 2017.
[5] Edmund M. Clarke et al. Counterexample-Guided Abstraction Refinement. CAV, 2000.
[6] Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Statistics for Engineering and Information Science, 2000.
[7] Sriram Sankaranarayanan et al. S-TaLiRo: A Tool for Temporal Logic Falsification for Hybrid Systems. TACAS, 2011.
[8] Dan Roth et al. Learning invariants using decision trees and implication counterexamples. POPL, 2016.
[9] M. Simonovits et al. Random walks and an O*(n^5) volume algorithm for convex bodies. 1997.
[10] Pravin Varaiya et al. What's decidable about hybrid automata? STOC, 1995.
[11] Weiming Xiang et al. Output Reachable Set Estimation and Verification for Multilayer Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 2017.
[12] Kenneth R. Butts et al. Powertrain control verification benchmark. HSCC, 2014.
[13] Edmund M. Clarke et al. dReal: An SMT Solver for Nonlinear Theories over the Reals. CADE, 2013.
[14] Antoine Girard et al. SpaceEx: Scalable Verification of Hybrid Systems. CAV, 2011.
[15] Krishnendu Chatterjee et al. Verification of Markov Decision Processes Using Learning Algorithms. ATVA, 2014.
[16] Geoffrey E. Hinton et al. Deep Learning. Nature, 2015.
[17] Rüdiger Ehlers et al. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks. ATVA, 2017.
[18] Melanie Mitchell. An introduction to genetic algorithms. 1996.
[19] Axel Legay et al. Statistical Model Checking: An Overview. RV, 2010.
[20] Alexander Aiken et al. A Data Driven Approach for Algebraic Loop Invariants. ESOP, 2013.
[21] Ezio Bartocci et al. Data-Driven Statistical Learning of Temporal Logic Properties. FORMATS, 2014.
[22] Min Wu et al. Safety Verification of Deep Neural Networks. CAV, 2016.
[23] Koushik Sen et al. Online efficient predictive safety analysis of multithreaded programs. International Journal on Software Tools for Technology Transfer, 2005.
[24] Joël Ouaknine et al. On Reachability for Hybrid Automata over Bounded Time. ICALP, 2011.
[25] Kurt Hornik et al. Multilayer feedforward networks are universal approximators. Neural Networks, 1989.
[26] Sanjit A. Seshia et al. Compositional Falsification of Cyber-Physical Systems with Machine Learning Components. NFM, 2017.
[27] Marta Z. Kwiatkowska et al. Stochastic Model Checking. SFM, 2007.
[28] Xin Chen et al. A Benchmark Suite for Hybrid Systems Reachability Analysis. NFM, 2015.
[29] Radu Grosu et al. Neural State Classification for Hybrid Systems. ATVA, 2018.