Exploring Robustness of Neural Networks through Graph Measures

Motivated by graph theory, artificial neural networks (ANNs) are traditionally structured as layers of neurons (nodes) that learn useful information as data pass through their interconnections (edges). In the machine learning realm, the graph structures (i.e., neurons and connections) of ANNs have recently been explored using various graph-theoretic measures linked to their predictive performance. In network science (NetSci), on the other hand, certain graph measures, including entropy and curvature, are known to provide insight into the robustness and fragility of real-world networks. In this work, we use these graph measures to explore the robustness of various ANNs to adversarial attacks. To this end, we (1) explore the design space of inter-layer and intra-layer connectivity regimes of ANNs in the graph domain and record their predictive performance after training under different types of adversarial attacks, (2) use graph representations of both inter-layer and intra-layer connectivity regimes to calculate various graph-theoretic measures, including curvature and entropy, and (3) analyze the relationship between these graph measures and the adversarial performance of the ANNs. We show that curvature and entropy, computed purely in the graph domain, can quantify the robustness of ANNs without training them. Our results suggest that real-world networks, including brain networks, financial networks, and social networks, may provide important clues for neural architecture search aimed at robust ANNs. We propose a search strategy that efficiently finds robust ANNs among a set of well-performing ANNs without needing to train all of them.
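
As a minimal sketch of how such training-free measures might be computed on a candidate connectivity graph, the example below builds two toy wirings with networkx and evaluates two of the measures named above: the entropy rate of a simple random walk on the graph, and the mean Ollivier-Ricci curvature of its edges obtained from the standard optimal-transport linear program. The graph sizes, the laziness parameter alpha = 0.5, the choice of small-world versus random-regular wirings, and the helper-function names are illustrative assumptions for this demo, not the configurations or code used in this work.

```python
# Illustrative sketch (not the paper's pipeline): score candidate connectivity
# graphs with two training-free measures, the random-walk entropy rate and the
# mean Ollivier-Ricci edge curvature. All sizes and parameters are assumptions.
import networkx as nx
import numpy as np
from scipy.optimize import linprog


def random_walk_entropy_rate(G):
    """Entropy rate of a simple random walk on G (assumes no isolated nodes)."""
    A = nx.to_numpy_array(G)
    deg = A.sum(axis=1)
    P = A / deg[:, None]              # walk transition matrix
    pi = deg / deg.sum()              # stationary distribution of the undirected walk
    logP = np.where(P > 0, np.log(P), 0.0)
    return float(-(pi[:, None] * P * logP).sum())


def ollivier_ricci_edge(G, x, y, alpha=0.5):
    """Ollivier-Ricci curvature of edge (x, y) using lazy random-walk measures."""
    def measure(v):                   # mass alpha stays at v, the rest spreads to neighbors
        nbrs = list(G.neighbors(v))
        m = {v: alpha}
        for u in nbrs:
            m[u] = m.get(u, 0.0) + (1.0 - alpha) / len(nbrs)
        return m

    mx, my = measure(x), measure(y)
    supp_x, supp_y = list(mx), list(my)
    # Ground costs: shortest-path distances between the two supports.
    D = np.array([[nx.shortest_path_length(G, a, b) for b in supp_y] for a in supp_x], float)
    n, m = len(supp_x), len(supp_y)
    # Wasserstein-1 distance as a transport LP: minimize <D, F> with matching marginals.
    A_eq, b_eq = [], []
    for i in range(n):                # row sums of the flow equal mu_x
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row); b_eq.append(mx[supp_x[i]])
    for j in range(m):                # column sums of the flow equal mu_y
        col = np.zeros(n * m); col[j::m] = 1.0
        A_eq.append(col); b_eq.append(my[supp_y[j]])
    res = linprog(D.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    w1 = res.fun
    return 1.0 - w1 / nx.shortest_path_length(G, x, y)


# Compare two toy wirings without training anything on top of them.
candidates = {
    "small-world":    nx.watts_strogatz_graph(64, 6, 0.1, seed=0),
    "random-regular": nx.random_regular_graph(6, 64, seed=0),
}
for name, g in candidates.items():
    entropy = random_walk_entropy_rate(g)
    curvature = np.mean([ollivier_ricci_edge(g, u, v) for u, v in g.edges()])
    print(f"{name:>14}: entropy rate = {entropy:.3f}, mean edge curvature = {curvature:.3f}")
```

In the spirit of the search strategy described above, measures of this kind could be used to rank a pool of candidate wirings before committing to any training; which measure best tracks adversarial robustness is the empirical question this work investigates.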
