Verification of Continuous Time Recurrent Neural Networks (Benchmark Proposal)

This manuscript presents a description and implementation of two benchmark problems for continuous-time recurrent neural network (RNN) verification. The first problem deals with the approximation of a vector field for a fixed-point attractor located at the origin, whereas the second deals with the system identification of a forced damped pendulum. While neural network verification is difficult, and most networks remain impenetrable to the majority of verification techniques, continuous-time RNNs, a class of networks originally derived in biology and neuroscience, are governed by nonlinear ordinary differential equations (ODEs) and may therefore be accessible to reachability methods for nonlinear ODEs. Thus, an understanding of the behavior of an RNN may be gained by simulating the nonlinear equations from a diverse set of initial conditions and inputs, or by performing reachability analysis from a set of initial conditions. The verification of continuous-time RNNs is a research area that has received little attention, and if the research community can achieve meaningful results in this domain, this class of neural networks may prove to be a superior approach to solving complex problems compared to other network architectures.

Category: Academic
Difficulty: High

1 Context and Origins

Artificial neural networks have demonstrated an effective and powerful ability to achieve success in numerous contexts, such as adaptive control [43], autonomous vehicles, evolutionary robotics, pattern recognition, image classification, and nonlinear system identification and control [38] [18]. Despite this success, there have been reservations about incorporating them into safety-critical systems [23] due to their susceptibility to unexpected and errant behavior caused by slight perturbations of their inputs and initial conditions [42] [37]. Typically, neural networks are viewed as "black boxes," since the underlying operation of the neuron activations is often indiscernible to the creators of the network [10].

In light of these challenges, there has been significant work towards obtaining formal guarantees about the behavior of neural networks [25]. However, the majority of verification schemes have only been able to deal with neural networks that make use of piecewise-linear activation functions [7]. This is due to the great difficulty of obtaining formal guarantees for even simple properties of neural networks. In fact, neural network verification has been demonstrated to be an NP-complete problem, and while techniques that make use of satisfiability modulo theories [35], mixed integer programming [41], robustness testing [4], and linear programming [13] [37] have been able to deal with small networks, they are incapable of dealing with the complexity and scale of the majority of networks present in real-life applications [23]. Moreover, the majority of verification approaches have dealt only with feed-forward and convolutional neural network architectures.

One class of neural networks that has received particularly little attention in the verification literature is the class of recurrent neural networks. While both feed-forward and recurrent networks have demonstrated an ability to approximate continuous functions to any accuracy [16], recurrent neural networks have exhibited several advantages over their feed-forward counterparts [26].
By allowing feedback connections in their architecture, recurrent neural networks are able to retain information about the past and capture a higher degree of sophisticated dynamics using fewer neurons than their feed-forward counterparts [5]. In fact, recurrent neural networks have demonstrated a higher level of success in solving problems in which there is a temporal relation between events [32], such as capturing the behavior of biological neurons [28], dynamical system identification [22], real-time gesture recognition [3], robotics [6, 8, 27, 30], and speech recognition [1]. Therefore, they represent a more attractive framework than feed-forward networks in these domains [47]. However, due to the complexity exhibited by their architecture, as well as the nonlinear nature of their activation functions, the verification approaches currently available in the research literature cannot be applied to these networks. Thus, there is an immediate need for methods and advanced software tools that can provide formal guarantees about their operation [23], particularly in the context of system identification and the control of safety-critical systems.

In light of this shortcoming, this paper presents two benchmark problems for the verification of a specific class of recurrent neural networks known as continuous-time recurrent neural networks (CTRNNs). Since the dynamics of CTRNNs can be expressed solely by a set of nonlinear ordinary differential equations (ODEs), the verification of such systems relies on an ability to reason about the reachable set from a set of initial conditions and inputs [39]. The two CTRNN benchmark problems are as follows: the first is a network without inputs employed for the approximation of a fixed-point attractor, as described in [46], and the second deals with a CTRNN used for the identification of a forced damped pendulum, as described in [12]. The problems elucidated in this paper are modeled using Simulink/Stateflow (SLSF) and Matlab scripts, and are also available in the SpaceEx format [15]. We aim to provide a thorough problem description against which the numerous tools and approaches for nonlinear systems in the research community can be evaluated and compared [39]. This paper serves as a first step towards recurrent neural network verification.

2 General Mathematical Model for Continuous Time Recurrent Neural Networks

The dynamics of a continuous-time recurrent neural network with n neurons are given by the following system of ordinary differential equations:
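In a standard formulation (the notation here follows the widely used form due to Beer and is an assumption about this paper's exact conventions), the dynamics are

$$\dot{x}_i(t) = \frac{1}{\tau_i}\left(-x_i(t) + \sum_{j=1}^{n} w_{ij}\,\sigma\big(x_j(t) + \theta_j\big) + I_i(t)\right), \qquad i = 1, \dots, n,$$

where $x_i$ is the internal state of neuron $i$, $\tau_i > 0$ is its time constant, $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$, $\theta_j$ is a bias, $I_i(t)$ is an external input (identically zero in the fixed-point attractor benchmark), and $\sigma(x) = 1/(1 + e^{-x})$ is the logistic activation function.

Because the benchmarks are distributed as Simulink/Stateflow models and Matlab scripts, the simulation-based view described above can be illustrated with a minimal Matlab sketch; the network size, weights, biases, time constants, and initial-condition box below are hypothetical placeholders rather than the benchmark parameters.

    % Minimal CTRNN simulation sketch. All parameter values are
    % hypothetical placeholders, not the benchmark networks' weights.
    n     = 2;                                   % number of neurons
    tau   = [1.0; 2.5];                          % time constants tau_i
    W     = [ 4.5 -1.0;                          % W(i,j) = w_ij, weight from
              1.0  4.5 ];                        % neuron j to neuron i
    theta = [-2.75; -1.75];                      % biases theta_j
    sigma = @(x) 1 ./ (1 + exp(-x));             % logistic activation
    f     = @(t, x) (-x + W * sigma(x + theta)) ./ tau;  % CTRNN ODE, I(t) = 0

    % Simulate from initial conditions sampled in the box [-1, 1]^n.
    figure; hold on;
    for k = 1:25
        x0 = -1 + 2 * rand(n, 1);                % random point in [-1, 1]^n
        [~, x] = ode45(f, [0, 20], x0);          % numerical integration
        plot(x(:, 1), x(:, 2));                  % state-space trajectory
    end
    xlabel('x_1'); ylabel('x_2');

A reachability tool for nonlinear ODEs operates on the same right-hand side, replacing the finitely many sampled initial points with the entire initial set.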

References

[1] Minho Lee, et al. Continuous Timescale Long-Short Term Memory Neural Network for Human Intent Understanding. Front. Neurorobot., 2017.

[2] Yi Cao, et al. Nonlinear system identification for predictive control using continuous time recurrent neural networks and automatic differentiation. 2008.

[3] Chandrasekhar Kambhampati, et al. Approximation of non-autonomous dynamic systems by continuous time recurrent neural networks. Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN), 2000.

[4] Shahrokh Valaee, et al. Recent Advances in Recurrent Neural Networks. ArXiv, 2017.

[5] Mahesh Viswanathan, et al. Automatic Reachability Analysis for Nonlinear Hybrid Models with C2E2. CAV, 2016.

[6] José Santos Reyes, et al. Evolution of adaptive center-crossing continuous time recurrent neural networks for biped robot control. ESANN, 2010.

[7] Razvan Pascanu, et al. Understanding the exploding gradient problem. ArXiv, 2012.

[8] Gerhard Tröster, et al. Real time gesture recognition using continuous time recurrent neural networks. BODYNETS, 2007.

[9] Taylor T. Johnson, et al. Non-linear Continuous Systems for Safety Verification (Benchmark Proposal). 2016.

[10] K. Warwick, et al. Dynamic recurrent neural network for system identification and control. 1995.

[11] Antoine Girard, et al. SpaceEx: Scalable Verification of Hybrid Systems. CAV, 2011.

[12] Ashish Tiwari, et al. Output Range Analysis for Deep Neural Networks. ArXiv, 2017.

[13] Rüdiger Ehlers, et al. Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks. ATVA, 2017.

[14] Peter J. Gawthrop, et al. Neural networks for control systems - A survey. Automatica, 1992.

[15] Gabriele M. T. D'Eleuterio, et al. Synthesis of recurrent neural networks for dynamical system simulation. Neural Networks, 2015.

[16] Michael Nikolaou, et al. Dynamic process modeling with recurrent neural networks. 1993.

[17] Luan Viet Nguyen, et al. Benchmark: A Nonlinear Reachability Analysis Test Set from Numerical Analysis. ARCH@CPSWeek, 2015.

[18] Yuichi Nakamura, et al. Approximation of dynamical systems by continuous time recurrent neural networks. Neural Networks, 1993.

[19] Attractor Neural Networks. cond-mat/9412030, 1994.

[20] E. Izquierdo-Torres. On the Evolution of Continuous Time Recurrent Neural Networks with Neutrality. 2004.

[21] Joan Bruna, et al. Intriguing properties of neural networks. ICLR, 2013.

[22] Tommy W. S. Chow, et al. Modeling of continuous time dynamical systems with input by recurrent neural networks. 2000.

[23] Pushmeet Kohli, et al. Piecewise Linear Neural Network verification: A comparative study. ArXiv, 2017.

[24] Chih-Hong Cheng, et al. Neural networks for safety-critical applications - Challenges, experiments and perspectives. Design, Automation & Test in Europe Conference & Exhibition (DATE), 2018.

[25] Randall D. Beer, et al. A miniature hybrid robot propelled by legs. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2001.

[26] Luca Pulina, et al. Challenging SMT solvers to verify neural networks. AI Commun., 2012.

[27] Mykel J. Kochenderfer, et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. CAV, 2017.

[28] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks. Commun. ACM, 2012.

[29] Rui Yan, et al. Robot-to-human handover with obstacle avoidance via continuous time Recurrent Neural Network. IEEE Congress on Evolutionary Computation (CEC), 2016.

[30] Barak A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: a survey. IEEE Trans. Neural Networks, 1995.

[31] M. O. Tade, et al. Nonlinear model predictive control of a multistage evaporator system using recurrent neural networks. 4th IEEE Conference on Industrial Electronics and Applications, 2009.

[32] Bernd Becker, et al. Towards Verification of Artificial Neural Networks. MBMV, 2015.

[33] Tommy W. S. Chow, et al. Approximation of dynamical time-variant systems by continuous-time recurrent neural networks. IEEE Transactions on Circuits and Systems II: Express Briefs, 2005.

[34] Stefano Nolfi, et al. Evolving robots able to integrate sensory-motor information over time. Theory in Biosciences, 2001.

[35] Ashish Tiwari, et al. Output Range Analysis for Deep Feedforward Neural Networks. NFM, 2018.

[36] Xin Chen, et al. Under-approximate flowpipes for non-linear continuous systems. Formal Methods in Computer-Aided Design (FMCAD), 2014.

[37] O. De Jesus, et al. A comparison of neural network control algorithms. International Joint Conference on Neural Networks (IJCNN), 2001.

[38] Christopher Kermorvant, et al. Dropout Improves Recurrent Neural Networks for Handwriting Recognition. 14th International Conference on Frontiers in Handwriting Recognition, 2014.

[39] Kurt Hornik, et al. Multilayer feedforward networks are universal approximators. Neural Networks, 1989.

[40] Luca Pulina, et al. Automated Verification of Neural Networks: Advances, Challenges and Perspectives. ArXiv, 2018.

[41] Randall D. Beer. On the Dynamics of Small Continuous-Time Recurrent Neural Networks. Adaptive Behavior, 1995.

[42] Márcio Lobo Netto, et al. Structural and Parametric Evolution of Continuous-Time Recurrent Neural Networks. 10th Brazilian Symposium on Neural Networks, 2008.

[43] Dave Cliff, et al. Challenges in evolving controllers for physical robots. Robotics Auton. Syst., 1996.

[44] John H. Hubbard. The Forced Damped Pendulum: Chaos, Complication and Control. 1999.

[45] Thomas Miconi. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks. 2017.

[46] Antonio Criminisi, et al. Measuring Neural Net Robustness with Constraints. NIPS, 2016.