Federated Learning with Bayesian Differential Privacy

We consider the problem of reinforcing federated learning with formal privacy guarantees. We propose to employ Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, to provide sharper privacy loss bounds. We adapt the Bayesian privacy accounting method to the federated setting and suggest multiple improvements for more efficient privacy budgeting at different levels. Our experiments show a significant advantage over state-of-the-art differential privacy bounds for federated learning on image classification tasks, including a medical application, bringing the privacy budget below ε = 1 at the client level and below ε = 0.1 at the instance level. The lower noise levels also improve model accuracy and reduce the number of communication rounds.
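To make the setting concrete, the following is a minimal sketch of one round of differentially private federated averaging in the DP-FedAvg style (clip each client's update, average, add Gaussian noise). The function name and parameters are illustrative, not from the paper; the paper's contribution is a Bayesian accountant that tracks the privacy cost of a given noise level more tightly, which this sketch does not implement.

```python
import numpy as np

def dp_federated_round(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One round of federated averaging with client-level Gaussian noise.

    Generic DP-FedAvg-style sketch: each client's model update is clipped
    to bound its influence (sensitivity), the clipped updates are averaged,
    and Gaussian noise calibrated to that sensitivity is added. A privacy
    accountant (moments, Renyi, or the paper's Bayesian variant) would then
    convert the chosen noise_multiplier into an (epsilon, delta) budget.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose L2 norm exceeds clip_norm.
        clipped.append(update * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    # Per-client sensitivity of the average is clip_norm / n.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

A tighter accountant such as the Bayesian one described in the abstract lets the server use a smaller `noise_multiplier` for the same privacy budget, which is why lower noise translates into better accuracy and fewer communication rounds.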
