Adversarial Detection on Graph Structured Data

Graph Neural Networks (GNNs) have achieved remarkable progress on graph-based tasks in recent years, such as node classification, graph classification, and link prediction. However, recent studies show that GNN models are highly vulnerable to adversarial attacks, so enhancing the robustness of such models remains a significant challenge. In this paper, we propose a subgraph-based method for detecting adversarial examples generated by adversarial perturbations. To the best of our knowledge, this is the first work on adversarial detection for deep-learning graph classification models that uses Subgraph Networks (SGNs) to restructure a graph's features. Moreover, we develop a joint adversarial detector to cope with more complex and unknown attacks. Specifically, we first explain how adversarial attacks can easily fool GNN models and then show that SGNs facilitate the separation of adversarial examples generated by state-of-the-art attacks. We conduct experiments on five real-world graph datasets using three different attack strategies against graph classification. Our empirical results demonstrate the effectiveness of our detection method and further illustrate the capacity of SGNs to distinguish malicious graphs.
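To make the pipeline concrete, below is a minimal sketch of the idea, assuming the first-order SGN is built as the line graph of the original graph (edges become nodes, connected when they share an endpoint) and substituting simple hand-crafted graph statistics plus an off-the-shelf classifier for the paper's joint detector; the function names and the `clean_graphs` / `adversarial_graphs` variables are hypothetical placeholders, not the authors' implementation.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sgn1(graph):
    """First-order Subgraph Network: the line graph of the input graph."""
    return nx.line_graph(graph)

def structural_features(graph):
    """A few simple graph-level statistics used as a stand-in feature vector."""
    degrees = [d for _, d in graph.degree()]
    return np.array([
        graph.number_of_nodes(),
        graph.number_of_edges(),
        float(np.mean(degrees)) if degrees else 0.0,
        nx.density(graph),
        nx.transitivity(graph),
    ])

def detector_features(graph):
    # Concatenate features of the original graph and its SGN(1), so
    # perturbations that barely shift the original statistics can still
    # show up in the subgraph-level view.
    return np.concatenate([structural_features(graph),
                           structural_features(sgn1(graph))])

# Hypothetical usage: clean_graphs and adversarial_graphs would hold
# unperturbed and attacked networkx graphs for a given dataset.
# X = np.stack([detector_features(g) for g in clean_graphs + adversarial_graphs])
# y = np.array([0] * len(clean_graphs) + [1] * len(adversarial_graphs))
# detector = RandomForestClassifier(n_estimators=100).fit(X, y)
```

The design choice mirrored here is that adversarial edge perturbations which are nearly invisible in the original feature space can become more separable once the graph is mapped into its subgraph network, so the detector is trained on both views jointly.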
