Adversarial Attacks on Neural Networks for Graph Data

Deep learning models for graphs have achieved strong performance for the task of node classification. Despite their proliferation, there is currently no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which target the training phase of a machine learning model. We generate adversarial perturbations targeting the node features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain, we propose an efficient algorithm, Nettack, that exploits incremental computations. Our experimental study shows that the accuracy of node classification drops significantly even under only a few perturbations. Even more, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and likewise succeed even when only limited knowledge about the graph is available.
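
To make the setting concrete, below is a minimal sketch of a Nettack-style greedy structure attack against a linearized two-layer GCN surrogate (logits = Â²XW), following the high-level description above. All function names, the toy random data, and the random weights are illustrative assumptions, not the authors' reference implementation; the paper's unnoticeability constraints (e.g. preserving the degree distribution) and incremental score updates are omitted here for brevity.

```python
# Sketch of a greedy structure-perturbation attack on a linearized GCN
# surrogate. Illustrative only; not the authors' reference implementation.
import numpy as np

def normalized_adj(A):
    """Symmetrically normalize A with self-loops: Â = D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def surrogate_logits(A, X, W):
    """Linearized two-layer GCN surrogate: nonlinearities dropped, so
    the logits are simply Â² X W."""
    A_norm = normalized_adj(A)
    return A_norm @ A_norm @ X @ W

def margin(logits, target, true_class):
    """Classification margin of the target node: true-class logit minus
    the best other logit. Negative margin means misclassification."""
    others = np.delete(logits[target], true_class)
    return logits[target, true_class] - others.max()

def greedy_edge_attack(A, X, W, target, true_class, budget=2):
    """Greedily flip the edge incident to the target that most reduces
    the target's margin, one flip per budget step."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best_score, best_node = np.inf, None
        for v in range(n):
            if v == target:
                continue
            A[target, v] = A[v, target] = 1 - A[target, v]  # try the flip
            score = margin(surrogate_logits(A, X, W), target, true_class)
            A[target, v] = A[v, target] = 1 - A[target, v]  # undo
            if score < best_score:
                best_score, best_node = score, v
        v = best_node
        A[target, v] = A[v, target] = 1 - A[target, v]      # commit best flip
    return A

# Toy demonstration with random data. In the actual attack the surrogate
# weights W come from training on the clean graph; here they are random.
rng = np.random.default_rng(0)
n, f, c = 20, 8, 3
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops
X = rng.random((n, f))
W = rng.standard_normal((f, c))
target, true_class = 0, 1
print("margin before:", margin(surrogate_logits(A, X, W), target, true_class))
A_pert = greedy_edge_attack(A, X, W, target, true_class)
print("margin after: ", margin(surrogate_logits(A_pert, X, W), target, true_class))
```

The exhaustive rescoring of every candidate flip shown here is what the incremental computations in Nettack are designed to avoid; the sketch trades that efficiency for readability.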
