Node Feature Kernels Increase Graph Convolutional Network Robustness

The robustness of the widely used Graph Convolutional Network (GCN) to perturbations of its input is becoming a topic of increasing importance. In this paper, the random GCN is introduced, for which a random matrix theory analysis is possible. This analysis suggests that if the graph is sufficiently perturbed, or in the extreme case random, then the GCN fails to benefit from the node features. It is furthermore observed that enhancing the message passing step in GCNs by adding a node feature kernel to the adjacency matrix of the graph structure solves this problem. An empirical study of a GCN utilised for node classification on six real datasets further confirms the theoretical findings and demonstrates that perturbations of the graph structure can result in GCNs performing significantly worse than Multi-Layer Perceptrons run on the node features alone. In practice, adding a node feature kernel to the message passing of perturbed graphs results in a significant improvement of the GCN's performance, thereby rendering it more robust to graph perturbations. Our code is publicly available at: https://github.com/ChangminWu/RobustGCN.
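The abstract's central idea of augmenting the message-passing operator with a node feature kernel can be illustrated with a minimal sketch. The kernel choice (a linear kernel X Xᵀ), the mixing weight `gamma`, and the scaling step below are all illustrative assumptions; the paper's exact construction and hyperparameters may differ.

```python
import numpy as np

def kernel_augmented_propagation(A, X, gamma=0.5):
    """One illustrative message-passing step: the adjacency matrix is
    augmented with a node-feature kernel before the usual symmetric
    normalisation used in GCNs.

    A: (n, n) adjacency matrix of the (possibly perturbed) graph.
    X: (n, d) node feature matrix.
    gamma: illustrative weight balancing structure vs. feature kernel.
    """
    n = A.shape[0]
    K = X @ X.T                          # linear node-feature kernel (assumed)
    K = K / max(np.abs(K).max(), 1e-12)  # scale kernel to a comparable magnitude
    M = A + gamma * K + np.eye(n)        # augmented operator with self-loops
    d = M.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.clip(d, 1e-12, None)))
    return D_inv_sqrt @ M @ D_inv_sqrt @ X  # propagate features
```

Even when A is heavily perturbed (or fully random), the kernel term keeps a feature-dependent component in the propagation, which is the mechanism the paper credits for the improved robustness.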
