Scalable attack on graph data by injecting vicious nodes

Recent studies have shown that graph convolutional networks (GCNs) are vulnerable to carefully designed attacks that aim to cause misclassification of a specific node on the graph with unnoticeable perturbations. However, the vast majority of existing works cannot handle large-scale graphs because of their high time complexity. Additionally, existing works mainly focus on manipulating existing nodes on the graph, while in practice attackers usually do not have the privilege to modify the information of existing nodes. In this paper, we develop a more scalable framework named the Approximate Fast Gradient Sign Method (AFGSM), which considers a more practical attack scenario where adversaries can only inject new vicious nodes into the graph while having no control over the original graph. Methodologically, we provide an approximation strategy to linearize the model we attack and then derive an approximate closed-form solution with a lower time cost. To enable a fair comparison with existing attack methods that manipulate the original graph, we adapt them to the new attack scenario by injecting vicious nodes. Experimental results show that our proposed attack method can significantly reduce the classification accuracy of GCNs and is much faster than existing methods without jeopardizing attack performance. We have open-sourced the code of our method at https://github.com/wangjhgithub/AFGSM.
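To make the idea concrete, the sketch below illustrates the general flavor of such a gradient-sign node-injection attack on a linearized two-layer GCN surrogate (logits Z = Â²XW, with Â the normalized adjacency). This is a minimal NumPy illustration written for this summary, not the authors' AFGSM implementation: the function names, the single-injected-node setup, and the binary feature budget are all assumptions made for the example.

```python
import numpy as np

def normalize_adj(A):
    # GCN propagation matrix: A_hat = D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def inject_node_features(A, X, W, target, true_class, budget):
    """One FGSM-style step: wire a new vicious node to `target`, then set the
    `budget` binary feature entries of that node whose gradient most
    decreases the target's true-class margin under the linearized
    surrogate Z = A_hat^2 X W."""
    n, d = X.shape
    # Inject one vicious node connected only to the target node.
    A_new = np.zeros((n + 1, n + 1))
    A_new[:n, :n] = A
    A_new[n, target] = A_new[target, n] = 1.0
    X_new = np.vstack([X, np.zeros(d)])

    S = normalize_adj(A_new)
    P = S @ S                          # two-layer linear propagation
    logits = P @ X_new @ W

    # Strongest competing class for the target node.
    order = np.argsort(logits[target])
    other = order[-1] if order[-1] != true_class else order[-2]

    # The margin is linear in the injected node's features, so its
    # gradient has a closed form: one scalar times a weight difference.
    grad = P[target, n] * (W[:, true_class] - W[:, other])

    # Turn on the `budget` most harmful (most negative gradient) features.
    X_new[n, np.argsort(grad)[:budget]] = 1.0
    return A_new, X_new
```

Because the surrogate is linear in the injected node's features, the most harmful feature entries can be read off directly from the gradient sign rather than found by iterative optimization; this is the source of the speedup the abstract attributes to linearization, in contrast to attacks that repeatedly retrain or re-evaluate the model.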
