Laplacian Power Networks: Bounding Indicator Function Smoothness for Adversarial Defense

Deep neural networks often lack robustness to adversarial noise. To mitigate this drawback, several approaches have been proposed, such as adding regularizers or training on adversarial examples. In this paper we propose a new regularizer built upon the Laplacian of similarity graphs obtained from the representations of training data at each intermediate layer of the architecture. This regularizer penalizes large changes, across consecutive layers, in the distance between examples of different classes. We provide a theoretical justification for this regularizer and demonstrate its effectiveness against adversarial noise on classical supervised vision datasets.
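
To make the construction concrete, here is a minimal sketch of how such a regularizer could be computed, assuming a k-nearest-neighbor cosine-similarity graph built on each mini-batch; the helper names (batch_laplacian, indicator_smoothness, laplacian_regularizer) and the choice of k are illustrative assumptions, not the authors' implementation. The key identity is that the Laplacian quadratic form f_c^T L f_c of the class-c indicator signal f_c sums the edge weights connecting class c to the other classes, so penalizing its variation between consecutive layers penalizes abrupt changes in cross-class distances.

```python
# A minimal sketch, assuming a k-NN cosine-similarity graph per mini-batch.
# Helper names and hyperparameters are illustrative, not the authors' code.
import torch
import torch.nn.functional as F

def batch_laplacian(x: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Combinatorial Laplacian L = D - W of a k-NN similarity graph on a batch."""
    x = x.flatten(1)                                   # (batch, features)
    xn = F.normalize(x, dim=1)
    sim = xn @ xn.t()                                  # pairwise cosine similarities
    idx = torch.topk(sim, min(k + 1, sim.size(1)), dim=1).indices  # +1: self-edge
    mask = torch.zeros_like(sim).scatter_(1, idx, 1.0)
    w = sim * torch.maximum(mask, mask.t())            # keep strongest edges, symmetrized
    w.fill_diagonal_(0.0)                              # no self-loops
    return torch.diag(w.sum(dim=1)) - w

def indicator_smoothness(lap: torch.Tensor, labels: torch.Tensor,
                         num_classes: int) -> torch.Tensor:
    """Sum over classes c of f_c^T L f_c, with f_c the one-hot indicator of class c."""
    f = F.one_hot(labels, num_classes).float()         # (batch, classes)
    return torch.einsum('ic,ij,jc->', f, lap, f)

def laplacian_regularizer(features_per_layer, labels, num_classes):
    """Penalize |s_{l+1} - s_l|: large jumps in indicator smoothness between layers."""
    s = [indicator_smoothness(batch_laplacian(h), labels, num_classes)
         for h in features_per_layer]
    return sum((b - a).abs() for a, b in zip(s[:-1], s[1:]))
```

In training, this term would simply be added to the task loss, e.g. loss = cross_entropy + lambda * laplacian_regularizer(hidden_states, labels, num_classes), with hidden_states the list of intermediate representations of the batch.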
