Privacy-Preserving Machine Learning: Methods, Challenges and Directions

Machine learning (ML) is increasingly being adopted in a wide variety of application domains. Usually, a well-performing ML model relies on a large volume of training data and high-powered computational resources. This reliance on huge volumes of data raises serious privacy concerns because of the potential risk of leakage of highly privacy-sensitive information; further, evolving regulatory environments that increasingly restrict access to and use of privacy-sensitive data add significant challenges to fully benefiting from the power of ML for data-driven applications. A trained ML model may also be vulnerable to adversarial attacks such as membership, attribute, or property inference attacks and model inversion attacks. Hence, well-designed privacy-preserving ML (PPML) solutions are critically needed for many emerging applications. Increasingly, significant research efforts from both academia and industry can be seen in PPML areas that aim toward integrating privacy-preserving techniques into the ML pipeline or into specific algorithms, or toward designing various PPML architectures. In particular, existing PPML research cross-cuts ML, systems and application design, as well as security and privacy; hence, there is a critical need to understand the state-of-the-art research, its challenges, and a roadmap for future research in the PPML area. In this paper, we systematically review and summarize existing privacy-preserving approaches and propose a Phase, Guarantee, and Utility (PGU) triad-based model to understand and guide the evaluation of various PPML solutions by decomposing their privacy-preserving functionalities. We discuss the unique characteristics and challenges of PPML and outline possible research directions that leverage, as well as benefit, multiple research communities such as ML, distributed systems, and security and privacy.
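Among the privacy-preserving techniques the survey covers, differential privacy is a common building block for training-time protection against inference attacks. As a purely illustrative sketch (not taken from the paper), the following shows the core of DP-SGD-style aggregation: clip each per-example gradient to a fixed norm, sum, and add calibrated Gaussian noise. The function name `dp_aggregate` and its parameters are assumptions for this example only.

```python
import numpy as np

def dp_aggregate(grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Average per-example gradients with clipping and Gaussian noise.

    Each gradient is scaled so its L2 norm is at most `clip_norm`
    (bounding any single example's influence), then Gaussian noise
    with standard deviation `noise_mult * clip_norm` is added to the sum.
    """
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in grads]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(grads)

# Example: the first gradient (norm 5.0) is clipped; the second is not.
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
avg = dp_aggregate(grads)
```

The clipping bound is what makes the noise scale meaningful: it caps the sensitivity of the sum to any one example, which is the quantity a (epsilon, delta) accounting method would then translate into a privacy guarantee.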
