Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach

Implicit neural networks are a general class of learning models that replace the layers of traditional feedforward models with implicit algebraic equations. Compared to traditional learning models, implicit networks offer competitive performance and reduced memory consumption; however, they can remain brittle with respect to adversarial input perturbations. This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks that blends mixed monotone systems theory and contraction theory. First, given an implicit neural network, we introduce a related embedded network and show that, given an ℓ∞-norm box constraint on the input, the embedded network provides an ℓ∞-norm box overapproximation of the output of the original network. Second, using ℓ∞-matrix measures, we propose sufficient conditions for well-posedness of both the original and the embedded network, and we design an iterative algorithm to compute the ℓ∞-norm box robustness margins for reachability and classification problems. Third, and of independent interest, we propose a novel relative classifier variable that leads to tighter bounds on certified adversarial robustness in classification problems. Finally, we perform numerical simulations on a Non-Euclidean Monotone Operator Network (NEMON) trained on the MNIST dataset, comparing the accuracy and run time of our mixed monotone contractive approach with existing robustness verification approaches for estimating certified adversarial robustness.
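To make the embedding idea concrete, the following is a minimal NumPy sketch of the kind of mixed-monotone box propagation the abstract describes, not the paper's exact algorithm. It assumes an implicit network of the illustrative form z = phi(A z + B x + b) with a monotone, elementwise activation phi; the names A, B, b, phi, and embedded_fixed_point are placeholders chosen here, and convergence of the iteration presumes a well-posedness condition on A such as the ℓ∞-matrix-measure bound mentioned above.

```python
import numpy as np

def embedded_fixed_point(A, B, b, x_lo, x_hi, phi=np.tanh,
                         max_iter=200, tol=1e-8):
    """Sketch: propagate an l_inf input box [x_lo, x_hi] through the
    embedded (mixed-monotone) version of z = phi(A z + B x + b).

    The weight matrices are split into nonnegative and nonpositive
    parts so that upper bounds are driven by upper bounds through the
    positive part and by lower bounds through the negative part, and
    vice versa. Convergence is assumed, e.g. via an l_inf
    matrix-measure well-posedness condition on A.
    """
    Ap, An = np.maximum(A, 0.0), np.minimum(A, 0.0)   # A = Ap + An
    Bp, Bn = np.maximum(B, 0.0), np.minimum(B, 0.0)   # B = Bp + Bn
    n = A.shape[0]
    z_lo, z_hi = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        new_hi = phi(Ap @ z_hi + An @ z_lo + Bp @ x_hi + Bn @ x_lo + b)
        new_lo = phi(Ap @ z_lo + An @ z_hi + Bp @ x_lo + Bn @ x_hi + b)
        delta = max(np.max(np.abs(new_hi - z_hi)),
                    np.max(np.abs(new_lo - z_lo)))
        z_lo, z_hi = new_lo, new_hi
        if delta < tol:
            break
    return z_lo, z_hi   # elementwise box containing the true fixed point
```

Under these assumptions, a downstream classification certificate would, for instance, pass the returned box through the output layer and check that the lower bound of the correct logit exceeds the upper bounds of all competing logits; the paper's relative classifier variable is aimed at tightening exactly this kind of comparison.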
