Certifying Incremental Quadratic Constraints for Neural Networks via Convex Optimization

Abstracting neural networks by the constraints they impose on their inputs and outputs is useful both for analyzing neural network classifiers and for deriving optimization-based algorithms that certify the stability and robustness of feedback systems involving neural networks. In this paper, we propose a convex program, in the form of a Linear Matrix Inequality (LMI), to certify incremental quadratic constraints on the map of a neural network over a region of interest. These certificates can capture several useful properties, such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction. We illustrate the utility of our approach in two settings. First, we develop a semidefinite program to compute guaranteed and sharp upper bounds on the local Lipschitz constant of neural networks, and we illustrate the results on random networks as well as networks trained on MNIST. Second, we consider a linear time-invariant system in feedback with an approximate model predictive controller parameterized by a neural network. We then turn the stability analysis into a semidefinite feasibility program and estimate an ellipsoidal invariant set for the closed-loop system.
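To make the first setting concrete, here is a minimal sketch of an SDP of this flavor for a one-hidden-layer ReLU network, written in Python with CVXPY. Both the modeling choice and all names below (W0, W1, T, rho) are illustrative assumptions, not the paper's implementation. The ReLU is abstracted by an incremental quadratic constraint encoding its slope restriction to [0, 1], with a diagonal multiplier T to keep the relaxation sound, and the LMI certifies that sqrt(rho) upper-bounds the Lipschitz constant of the network map.

    import cvxpy as cp
    import numpy as np

    # Hypothetical one-hidden-layer ReLU network f(x) = W1 @ relu(W0 @ x + b0) + b1.
    # Biases cancel in incremental (Lipschitz) arguments, so they are omitted here.
    rng = np.random.default_rng(0)
    n, h = 4, 10                                # input and hidden dimensions
    W0 = rng.standard_normal((h, n)) / np.sqrt(n)
    W1 = rng.standard_normal((2, h)) / np.sqrt(h)

    rho = cp.Variable(nonneg=True)              # rho = L^2, squared Lipschitz bound
    lam = cp.Variable(h, nonneg=True)           # diagonal multiplier certifying the
    T = cp.diag(lam)                            # slope restriction of ReLU to [0, 1]

    # LMI: [[-rho*I, W0' T], [T W0, -2T + W1' W1]] <= 0 implies
    # ||f(x) - f(y)|| <= sqrt(rho) * ||x - y|| for all x, y.
    M = cp.bmat([[-rho * np.eye(n), W0.T @ T],
                 [T @ W0, -2 * T + W1.T @ W1]])
    # M is symmetric by construction; symmetrize explicitly for the solver interface.
    prob = cp.Problem(cp.Minimize(rho), [(M + M.T) / 2 << 0])
    prob.solve(solver=cp.SCS)
    print("certified Lipschitz upper bound:", np.sqrt(rho.value))

A local variant over a region of interest can tighten the slope bounds of neurons whose activation pattern is fixed on that region, which shrinks the feasible multiplier set and sharpens rho; the sketch above only certifies the global constant.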

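For the second setting, the following is a hedged sketch of a semidefinite feasibility program of the kind described above, under a simplifying assumption made purely for illustration: the neural network controller is modeled as a nominal linear gain plus a norm-bounded residual, pi(x) = K x + delta(x) with ||delta(x)|| <= gamma ||x||, where gamma would be supplied by an incremental (Lipschitz-type) certificate on the residual pi - K. The plant matrices A and B, the gain K, and the bound gamma are made-up values; the S-procedure turns the Lyapunov decrease condition into an LMI that is jointly linear in the Lyapunov matrix P and the multiplier lambda.

    import cvxpy as cp
    import numpy as np

    # Hypothetical discrete-time double integrator with a nominal gain K.
    # The neural controller is modeled, for illustration only, as
    # pi(x) = K x + delta(x) with ||delta(x)|| <= gamma * ||x||.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    K = np.array([[-1.0, -1.5]])
    gamma = 0.1                                 # assumed bound on the residual
    Acl = A + B @ K
    n, m = B.shape

    P = cp.Variable((n, n), symmetric=True)     # Lyapunov matrix
    lam = cp.Variable(nonneg=True)              # S-procedure multiplier

    # LMI: V(x) = x' P x decreases along x+ = Acl x + B delta for every
    # delta with delta' delta <= gamma^2 x' x.
    M = cp.bmat([[Acl.T @ P @ Acl - P + lam * gamma**2 * np.eye(n), Acl.T @ P @ B],
                 [B.T @ P @ Acl, B.T @ P @ B - lam * np.eye(m)]])
    prob = cp.Problem(cp.Minimize(0),
                      [P >> np.eye(n), (M + M.T) / 2 << -1e-8 * np.eye(n + m)])
    prob.solve(solver=cp.SCS)
    print(prob.status)                          # 'optimal' means a certificate exists
    print("P =\n", P.value)

If the program is feasible, every sublevel set {x : x' P x <= c} is an invariant ellipsoid for the closed-loop system, which is the kind of set estimate described in the abstract.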