CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph

Graph neural networks (GNNs) exhibit remarkable performance in graph data analysis. However, the robustness of GNN models remains a challenge, leaving them too unreliable to deploy in critical applications. Recent studies demonstrate that GNNs can be easily fooled by adversarial perturbations, especially structural perturbations. This vulnerability is attributed to the models' excessive dependence on structural information for making predictions. To achieve better robustness, it is desirable to base GNN predictions on more comprehensive information. In most cases, graph data offers two views of information: the graph structure and the node features. In this paper, we propose CoG, a simple yet effective co-training framework that combines these two views to improve robustness. CoG trains sub-models on the feature view and the structure view independently and lets them distill knowledge from each other by adding their most confidently predicted unlabeled nodes to the training set. The orthogonality of the two views diversifies the sub-models, thus enhancing the robustness of their ensemble. We evaluate our framework on three popular datasets, and the results show that CoG significantly improves the robustness of graph models against adversarial attacks without sacrificing their performance on clean data. We also show that CoG remains robust when both node features and graph structures are perturbed.
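The co-training loop described above can be made concrete with a short sketch. The following PyTorch code is a minimal, illustrative version: the names `feature_model` and `structure_model` (each assumed to map `data` to per-node logits), the `fit` helper, and all hyperparameters are our own assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def fit(model, data, labels, mask, epochs=200):
    """Train one sub-model on the currently labeled nodes."""
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(data)[mask], labels[mask])
        loss.backward()
        opt.step()

def cog_cotrain(feature_model, structure_model, data, labels,
                train_mask, rounds=5, k=100):
    """Sketch of a CoG-style two-view co-training loop."""
    mask = train_mask.clone()    # nodes that currently carry (pseudo-)labels
    pseudo = labels.clone()      # ground-truth labels plus pseudo-labels
    models = (feature_model, structure_model)
    for _ in range(rounds):
        # 1. Train each view's sub-model independently on the current pool.
        for model in models:
            fit(model, data, pseudo, mask)
        # 2. Each view pseudo-labels its k most confident unlabeled nodes
        #    and contributes them to the shared training set, so the other
        #    view can learn from them in the next round.
        new_nodes = []
        for model in models:
            model.eval()
            with torch.no_grad():
                probs = F.softmax(model(data), dim=1)
            conf, pred = probs.max(dim=1)
            conf[mask] = -1.0    # exclude nodes that already have labels
            top = conf.topk(k).indices
            new_nodes.append((top, pred[top]))
        for idx, lab in new_nodes:
            pseudo[idx] = lab    # note: a real implementation would resolve
            mask[idx] = True     # conflicting pseudo-labels between views
    # 3. Final prediction: ensemble the two views by averaging probabilities.
    with torch.no_grad():
        out = sum(F.softmax(m(data), dim=1) for m in models) / len(models)
    return out.argmax(dim=1)
```

Because the two views rely on largely disjoint evidence, they tend to be confident about different unlabeled nodes, which is one way the orthogonality of the views diversifies the sub-models; how pseudo-labels are selected and how conflicts between views are resolved are implementation choices this sketch deliberately leaves open.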
