Membership Inference Attack on Graph Neural Networks

Graph Neural Networks (GNNs), which generalize traditional deep neural networks to graph data, have achieved state-of-the-art performance on several graph analytical tasks such as node classification, link prediction, and graph classification. We focus on how trained GNN models can leak information about the member nodes they were trained on. In particular, we answer the question: given a graph, can we determine which nodes were used to train the GNN model? We operate in the inductive setting for node classification, meaning that none of the nodes in the test set (the non-member nodes) were seen during training. We propose a simple attack model that distinguishes between member and non-member nodes while having only black-box access to the target model. We experimentally compare the privacy risks of four representative GNN models. Our results show that all the studied GNN models are vulnerable to privacy leakage. While in traditional machine learning models overfitting is considered the main cause of such leakage, we show that in GNNs the additional structural information is the major contributing factor.
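
Below is a minimal sketch of the kind of black-box membership inference attack the abstract describes, for the inductive node-classification setting. It is an illustrative reconstruction, not the paper's exact attack: the dataset (Cora), the 2-layer GCN target, the sorted-posterior attack features, and the logistic-regression attack model are all assumptions made for the example.

```python
# Illustrative sketch (assumptions noted above): black-box membership inference
# against a GNN trained inductively on member nodes only.
import numpy as np
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv
from torch_geometric.utils import subgraph
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

data = Planetoid(root="/tmp/Cora", name="Cora")[0]
num_classes = int(data.y.max()) + 1

# Inductive setting: the target GNN only ever sees the member nodes and the
# edges among them; non-member (test) nodes are completely held out.
member_idx = data.train_mask.nonzero(as_tuple=False).view(-1)
nonmember_idx = data.test_mask.nonzero(as_tuple=False).view(-1)
member_edge_index, _ = subgraph(member_idx, data.edge_index,
                                relabel_nodes=True, num_nodes=data.num_nodes)

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

target = GCN(data.num_features, 16, num_classes)
opt = torch.optim.Adam(target.parameters(), lr=0.01, weight_decay=5e-4)
x_mem, y_mem = data.x[member_idx], data.y[member_idx]
for _ in range(200):  # train the target model on member nodes only
    opt.zero_grad()
    F.cross_entropy(target(x_mem, member_edge_index), y_mem).backward()
    opt.step()

# Black-box queries: the adversary observes only the posteriors returned for
# queried nodes (queried together with their neighbourhoods).
target.eval()
with torch.no_grad():
    posteriors = F.softmax(target(data.x, data.edge_index), dim=1)

def attack_features(p):
    # Sorted posterior vector: a common, label-agnostic membership feature.
    return torch.sort(p, dim=1, descending=True).values.numpy()

mem_f = attack_features(posteriors[member_idx])
non_f = attack_features(posteriors[nonmember_idx])

# Simple attack model: a binary classifier over posterior features. Half of the
# known member/non-member nodes stand in for shadow-model training data; the
# other half measures attack performance. Label 1 = member, 0 = non-member.
hm, hn = len(mem_f) // 2, len(non_f) // 2
X_tr = np.vstack([mem_f[:hm], non_f[:hn]])
y_tr = np.r_[np.ones(hm), np.zeros(hn)]
X_te = np.vstack([mem_f[hm:], non_f[hn:]])
y_te = np.r_[np.ones(len(mem_f) - hm), np.zeros(len(non_f) - hn)]

attack = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("membership attack AUC:", roc_auc_score(y_te, attack.predict_proba(X_te)[:, 1]))
```

An attack AUC noticeably above 0.5 indicates membership leakage. In the abstract's framing, such leakage in GNNs is driven not only by overfitting but also by the structural (neighbourhood) information that the model aggregates into member nodes' posteriors.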
