Lipschitz Certificates for Layered Network Structures Driven by Averaged Activation Operators

Obtaining sharp Lipschitz constants for feed-forward neural networks is essential for assessing their robustness to input perturbations. We derive such constants in the context of...
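The premise can be illustrated with the standard baseline that sharper analyses improve on: for a feed-forward network whose activations are 1-Lipschitz (e.g. ReLU), the product of the layers' spectral norms is always a valid, though typically loose, Lipschitz constant. The sketch below is a minimal illustration of that baseline bound using numpy with arbitrary random weights; it is not the certificate derived in the paper.

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Product of the layers' spectral norms.

    For a network x -> W_n s(... s(W_1 x)) with 1-Lipschitz activations s,
    this product upper-bounds the network's Lipschitz constant, but it is
    generally far from sharp.
    """
    # np.linalg.norm(W, 2) is the largest singular value of W.
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Illustrative 3-layer network with random (hypothetical) weights.
rng = np.random.default_rng(0)
layers = [
    rng.standard_normal((8, 4)),
    rng.standard_normal((8, 8)),
    rng.standard_normal((2, 8)),
]
print(naive_lipschitz_bound(layers))
```

For diagonal layers the bound is exact, e.g. weights `2*I` and `3*I` give a constant of 6; for deep networks with interacting layers it can overestimate by orders of magnitude, which is what motivates finer certificates.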
