Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability
Wenqi Wei | Mehmet Emre Gursoy | Lei Yu | Ling Liu | Stacey Truex