Spatio-Temporal Sparsification for General Robust Graph Convolution Networks

Graph Neural Networks (GNNs) have attracted increasing attention due to their successful applications on various graph-structured data. However, recent studies have shown that adversarial attacks threaten the functionality of GNNs. Although numerous works have been proposed to defend against adversarial attacks from various perspectives, most of them are robust only in specific attack scenarios. To address this lack of robust generalization, we propose to defend against adversarial attacks on GNNs by applying Spatio-Temporal sparsification (called ST-Sparse) to the GNN hidden node representations. ST-Sparse is similar in spirit to Dropout regularization. Through extensive experimental evaluation with GCN as the target GNN model, we identify the benefits of ST-Sparse as follows: (1) ST-Sparse improves defense performance in most cases, increasing robust accuracy by up to 6%; (2) ST-Sparse demonstrates its robust generalization capability by integrating with existing defense methods, much as Dropout is integrated into various deep learning models as a standard regularization technique; (3) ST-Sparse also shows ordinary generalization capability on clean datasets, in that ST-SparseGCN (the integration of ST-Sparse and the original GCN) even outperforms the original GCN, while the three other representative defense methods fall short of it.
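
The abstract does not spell out the exact ST-Sparse mechanism, but the following minimal PyTorch Geometric sketch illustrates the general idea it describes: a sparsification step applied to the hidden node representations of a GCN, inserted where Dropout would normally go. The top-k masking, the `SparseGCN` class name, and the `sparsity_ratio` parameter are illustrative assumptions, not the paper's actual ST-Sparse layer.

```python
# Illustrative sketch only: a two-layer GCN whose hidden representation is
# sparsified by keeping the largest activations per node, in the spirit of
# Dropout. The top-k rule and the names below are hypothetical stand-ins
# for the paper's ST-Sparse mechanism.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


def topk_sparsify(h: torch.Tensor, sparsity_ratio: float) -> torch.Tensor:
    """Zero out all but the top (1 - sparsity_ratio) fraction of features per node."""
    k = max(1, int(h.size(1) * (1.0 - sparsity_ratio)))
    _, topk_idx = h.topk(k, dim=1)
    mask = torch.zeros_like(h).scatter_(1, topk_idx, 1.0)
    return h * mask


class SparseGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes, sparsity_ratio=0.8):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)
        self.sparsity_ratio = sparsity_ratio

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        # Sparsify the hidden node representations between the two GCN layers,
        # analogous to where Dropout would be applied.
        h = topk_sparsify(h, self.sparsity_ratio)
        return F.log_softmax(self.conv2(h, edge_index), dim=1)
```

Because the sparsification acts only on hidden representations, it can in principle be combined with other defenses (e.g., robust aggregation or graph purification) in the same way Dropout is layered into existing architectures.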
