This benchmark suite presents a detailed description of a series of closed-loop control systems with artificial neural network controllers. In many applications, feed-forward neural networks are heavily involved in the implementation of controllers, learning and representing control laws through methods such as model predictive control (MPC) and reinforcement learning (RL). The networks that we consider in this manuscript are feed-forward neural networks consisting of multiple hidden layers with ReLU activation functions and a linear activation function in the output layer. While neural network controllers have achieved desirable performance in many contexts, they also present a unique challenge: it is difficult to provide any guarantees about the correctness of their behavior or to reason about the stability of a system that employs them. Thus, from a controls perspective, it is necessary to verify them in conjunction with their corresponding plants in closed loop. While a handful of works have been proposed towards the verification of closed-loop systems with feed-forward neural network controllers, this area still lacks attention and a unified set of benchmark examples on which verification techniques can be evaluated and compared. To this end, we present a set of closed-loop control systems with two to six state variables, and controllers ranging in size from eleven neurons to a few hundred neurons in the more complex systems.

Category: Academic
Difficulty: High

Acknowledgement. The material presented in this paper is based upon work supported by the National Science Foundation (NSF) under grant number SHF 1736323, the Air Force Office of Scientific Research (AFOSR) through contract numbers FA9550-15-1-0258, FA9550-16-10246, and FA9550-18-1-0122, and the Defense Advanced Research Projects Agency (DARPA) through contract number FA8750-18-C-0089. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of AFOSR, DARPA, or NSF.

G. Frehse and M. Althoff (eds.), ARCH19 (EPiC Series in Computing, vol. 61), pp. 201–210
Closed-loop Systems with Neural Network Controllers. Manzanas Lopez, Musau, Tran and Johnson

1 Context and Origins

In recent years, advances in Artificial Intelligence (AI) have enabled a diverse range of technologies that are directly impacting people's everyday lives [16]. In particular, within this space, machine learning methods such as Deep Learning (DL) have achieved levels of accuracy and performance that are competitive with or better than humans for tasks such as pattern and image recognition [12], natural language processing [7], and knowledge representation and reasoning [15, 22]. Despite these achievements, there have been reservations about incorporating them into safety-critical systems [11] due to their susceptibility to unexpected and errant behavior caused by slight perturbations in their inputs [18]. Furthermore, neural networks are often viewed as "black boxes," since the underlying operation of the neuron activations is often indiscernible [22]. In light of these challenges, there has been significant work towards the creation of methods and verification tools that can formally reason about the behavior of neural networks [22]. However, the vast majority of these techniques have only been able to deal with feed-forward neural networks with piecewise-linear activation functions [4].
Additionally, the bulk of these methods have primarily considered the verification of input-output properties of neural networks in isolation [22], and only a handful of works have explicitly addressed the verification of closed-loop control systems with neural network controllers [5, 8, 19–21]. One of the central challenges in verifying neural network control systems is that applying existing methodology to these systems is not straightforward [9]: a simple combination of verification tools for non-linear ordinary differential equations with a neural network reachability tool suffers from severe overestimation errors [5]. Still, the verification of closed-loop neural network systems is deeply important, as such systems naturally arise in safety-critical settings [5] such as autonomous vehicles, and in complex control systems that make use of model predictive control and reinforcement learning [16]. Thus, there is a compelling need for methods and advanced software tools that can effectively deal with the complexities exhibited by these systems [5]. Motivated by the shortage of verification methods for closed-loop neural network control systems in the research literature, the central contribution of this paper is a set of executable benchmarks synthesized using methods such as reinforcement learning [17] and model predictive control [14]. The problems elucidated in this paper are modeled using Simulink/Stateflow (SLSF) and are available in a GitHub repository1. We aim to provide a thorough problem description against which the numerous tools and approaches for non-linear systems and neural network verification present in the research community can be evaluated and compared [22].
If the research community is able to devise acceptable solutions to the aforementioned challenges, it will stimulate the development of robust and intelligent systems with the potential to bring unparalleled benefits to numerous application domains.

2 Description of Benchmarks

In this manuscript, we present a set of linear and non-linear closed-loop systems with continuous-time plants and feed-forward neural network controllers trained using different control schemes such as reinforcement learning or model predictive control (MPC). A typical architecture describing the structure of these systems is displayed in Figure 2.1.

1 https://github.com/verivital/ARCH-2019
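As a concrete illustration of this architecture, the following minimal sketch simulates a sampled-data closed loop of the kind described above: a feed-forward network with ReLU hidden layers and a linear output layer computes the control input at a fixed sampling period, and the plant is integrated between control updates. It is written in Python (rather than SLSF) purely for illustration; the randomly initialized weights stand in for a trained controller, and the pendulum-like two-state plant is a hypothetical example, not one of the benchmark plants.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def controller(x, W1, b1, W2, b2):
    # ReLU hidden layer followed by a linear output layer,
    # matching the controller architecture considered in this paper
    return W2 @ relu(W1 @ x + b1) + b2

def plant(x, u):
    # Hypothetical two-state non-linear plant (pendulum-like);
    # stands in for the continuous-time dynamics x' = f(x, u)
    theta, omega = x
    return np.array([omega, -np.sin(theta) + u[0]])

def simulate(x0, W1, b1, W2, b2, t_end=5.0, h=0.01, control_period=0.1):
    """Sampled-data closed loop: the controller output is held constant
    between samples, and the plant ODE is integrated with forward Euler."""
    x = x0.copy()
    u = np.zeros(1)
    steps_per_sample = int(round(control_period / h))
    n_steps = int(round(t_end / h))
    traj = [x.copy()]
    for step in range(n_steps):
        if step % steps_per_sample == 0:
            u = controller(x, W1, b1, W2, b2)  # periodic feedback update
        x = x + h * plant(x, u)                # one Euler step of the plant
        traj.append(x.copy())
    return np.array(traj)

# Randomly initialized 11-neuron controller (illustrative, untrained)
rng = np.random.default_rng(1)
W1, b1 = 0.1 * rng.standard_normal((11, 2)), np.zeros(11)
W2, b2 = 0.1 * rng.standard_normal((1, 11)), np.zeros(1)
traj = simulate(np.array([0.5, 0.0]), W1, b1, W2, b2)
```

In the benchmark models themselves, the plant is an SLSF continuous-time block and the controller a trained network; this sketch only mirrors the interconnection of Figure 2.1 (plant state fed back to the network, network output applied as the control input).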
References

[1] Asifullah Khan et al. A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Review, 2019.
[2] Weiming Xiang et al. Reachability Analysis and Safety Verification for Neural Network Control Systems. arXiv, 2018.
[3] Weiming Xiang et al. Verification for Machine Learning, Autonomy, and Neural Networks Survey. arXiv, 2018.
[4] Demis Hassabis et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016.
[5] Insup Lee et al. Verisig: verifying safety properties of hybrid systems with neural network controllers. HSCC, 2018.
[6] Richard S. Sutton et al. Introduction to Reinforcement Learning. 1998.
[7] Mykel J. Kochenderfer et al. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. CAV, 2017.
[8] Richard S. Sutton et al. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, 1983.
[9] Oren Etzioni et al. Artificial intelligence and life in 2030: the one hundred year study on artificial intelligence. 2016.
[10] Pushmeet Kohli et al. Piecewise Linear Neural Network verification: A comparative study. arXiv, 2017.
[11] Antoine Girard et al. SpaceEx: Scalable Verification of Hybrid Systems. CAV, 2011.
[12] Joan Bruna et al. Intriguing properties of neural networks. ICLR, 2013.
[13] Yoav Goldberg et al. A Primer on Neural Network Models for Natural Language Processing. Journal of Artificial Intelligence Research, 2015.
[14] Sergiy Bogomolov et al. HYST: a source transformation and translation tool for hybrid automaton models. HSCC, 2015.
[15] James Kapinski et al. INVITED: Reasoning about Safety of Learning-Enabled Components in Autonomous Cyber-physical Systems. DAC, 2018.
[16] S. Joe Qin et al. An Overview of Nonlinear Model Predictive Control Applications. 2000.
[17] Sriram Sankaranarayanan et al. Reachability analysis for neural feedback systems using regressive polynomial rule inference. HSCC, 2019.