Single-Node Attack for Fooling Graph Neural Networks

Graph neural networks (GNNs) have shown broad applicability across many domains. Some of these domains, such as social networks and product recommendation, are fertile ground for malicious users and behavior. In this paper, we show that GNNs are vulnerable to an extremely limited attack scenario: a single-node adversarial example in which the attacker does not even get to choose which node is perturbed. That is, an attacker can force the GNN to assign a chosen label to any target node by slightly perturbing a single, arbitrarily selected other node in the graph. When the adversary is allowed to pick the attacker node, the attack is even more effective. We show that this attack is effective across GNN architectures such as GraphSAGE, GCN, GAT, and GIN, across a variety of real-world datasets, and in both targeted and non-targeted settings. Our code is available at this https URL .
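
To make the attack scenario concrete, the sketch below shows what a targeted single-node attack might look like in PyTorch Geometric: the features of one attacker node are perturbed with signed-gradient steps so that a frozen GCN flips its prediction for a different target node. This is an illustrative assumption of how such an attack can be implemented, not the paper's released code; the model, step size, and the names `single_node_attack`, `target_idx`, and `attacker_idx` are invented for the example.

```python
# Minimal sketch (assumed, not the paper's code): a targeted single-node
# feature-perturbation attack on a GCN node classifier.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def single_node_attack(model, x, edge_index, target_idx, attacker_idx,
                       chosen_label, step_size=0.01, steps=50):
    """Perturb only the attacker node's features so the (frozen) model
    predicts `chosen_label` for the target node."""
    model.eval()
    # Perturbation for a single row of the feature matrix.
    delta = torch.zeros(x.size(1), requires_grad=True)
    # Mask restricting the perturbation to the attacker node.
    mask = torch.zeros(x.size(0), 1)
    mask[attacker_idx] = 1.0
    label = torch.tensor([chosen_label])

    for _ in range(steps):
        x_adv = x + mask * delta              # only one node is modified
        logits = model(x_adv, edge_index)
        # Targeted attack: push the target node toward the chosen label.
        loss = F.cross_entropy(logits[target_idx].unsqueeze(0), label)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta -= step_size * grad.sign()  # signed-gradient descent step

    x_adv = x + mask * delta.detach()
    return model(x_adv, edge_index)[target_idx].argmax().item()


if __name__ == "__main__":
    # Cora is one of the standard citation datasets in GNN evaluations.
    data = Planetoid(root="data", name="Cora")[0]
    model = GCN(data.num_features, 16, 7)
    # A trained model is assumed here; training is omitted for brevity.
    new_pred = single_node_attack(model, data.x, data.edge_index,
                                  target_idx=0, attacker_idx=100,
                                  chosen_label=3)
    print("target node is now classified as:", new_pred)
```

For a non-targeted variant one would instead maximize the loss of the target node's current prediction; the point of the sketch is only that the gradient with respect to a single node's features can be enough to move another node's prediction.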
