Model Stealing Attacks Against Inductive Graph Neural Networks

Many real-world datasets come in the form of graphs. Graph neural networks (GNNs), a family of machine learning (ML) models, have been proposed to fully leverage graph data to build powerful applications. In particular, inductive GNNs, which can generalize to unseen data, have become mainstream in this direction. Machine learning models have shown great potential in various tasks and have been deployed in many real-world scenarios. Training a good model requires large amounts of data as well as computational resources, making such models valuable intellectual property. Previous research has shown that ML models are prone to model stealing attacks, which aim to steal the functionality of the target models. However, most prior work focuses on models trained on images and text; little attention has been paid to models trained on graph data, i.e., GNNs. In this paper, we fill this gap by proposing the first model stealing attacks against inductive GNNs. We systematically define the threat model and propose six attacks based on the adversary's background knowledge and the responses of the target models. Our evaluation on six benchmark datasets shows that the proposed model stealing attacks against GNNs achieve promising performance.
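
To make the extraction pipeline the abstract describes more concrete, below is a minimal, self-contained PyTorch sketch of the generic recipe: query the target model with a graph the adversary controls, collect the returned posteriors, and train a surrogate to mimic them. The two-layer GraphSAGE-style architecture, the randomly generated query graph, and the KL-divergence matching objective are illustrative assumptions standing in for the paper's concrete attack variants, which vary with the adversary's background knowledge and the target's response type.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAGELayer(nn.Module):
    """One GraphSAGE-style layer: mean-aggregate neighbors, then transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        # adj: dense (N, N) adjacency; row-normalize for mean aggregation.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ x) / deg
        return self.lin(torch.cat([x, neigh], dim=1))

class SAGE(nn.Module):
    """Two-layer inductive GNN used for both the victim and the surrogate."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.l1 = SAGELayer(in_dim, hid_dim)
        self.l2 = SAGELayer(hid_dim, n_classes)

    def forward(self, x, adj):
        return self.l2(F.relu(self.l1(x, adj)), adj)

# Hypothetical setup: a random query graph standing in for data the
# adversary can feed to the target's API.
torch.manual_seed(0)
N, D, C = 200, 16, 4
x = torch.randn(N, D)
adj = (torch.rand(N, N) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()          # symmetrize

target = SAGE(D, 32, C)                      # stands in for the trained victim
surrogate = SAGE(D, 32, C)

# Step 1: query the target and record its posterior responses.
with torch.no_grad():
    posteriors = F.softmax(target(x, adj), dim=1)

# Step 2: train the surrogate to match those posteriors (KL divergence).
opt = torch.optim.Adam(surrogate.parameters(), lr=0.01)
for epoch in range(100):
    opt.zero_grad()
    log_p = F.log_softmax(surrogate(x, adj), dim=1)
    loss = F.kl_div(log_p, posteriors, reduction="batchmean")
    loss.backward()
    opt.step()
```

In a label-only setting, the recorded `posteriors` would be replaced by the target's hard predicted labels and the KL term by a cross-entropy loss; embedding responses would instead be matched with a reconstruction-style objective.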
