Integral Privacy for Sampling from Mollifier Densities with Approximation Guarantees

$\varepsilon$-differential privacy is a leading privacy-protection framework, focused by design on individual privacy. Many applications, for instance in the medical and pharmaceutical domains, instead require privacy at the level of groups, possibly of unknown size, a setting in which classical budget-scaling tricks typically cannot guarantee non-trivial privacy levels. We call this privacy setting integral privacy. In this paper, we study a central problem of machine learning and statistics, with applications in the domains above that have recently drawn substantial attention: sampling. Our formal contribution is twofold: we provide a general theory for sampling to be integrally private, and we show how to achieve integral privacy with guarantees on the approximation of the true (non-private) density. Our theory introduces $\varepsilon$-mollifiers, subsets of densities from which sampling is guaranteed to be integrally private. Guaranteed approximation bounds on the true density are obtained via boosting theory as it was originally formulated: we learn the sufficient statistics of an exponential family inside an $\varepsilon$-mollifier using classifiers, which yields approximation guarantees and convergence rates that degrade gracefully with the privacy budget, under weak assumptions. The approximation guarantees cover the mode-capture problem. Experimental results against private kernel density estimation and private GANs display the quality of our approach, in particular in high-privacy regimes.
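As a concrete illustration of the central definition, the following is a minimal formal sketch of the $\varepsilon$-mollifier condition and of why it yields integral privacy; the notation is ours, and the paper's exact statement may differ. A set of densities $M_\varepsilon$ over a common support is an $\varepsilon$-mollifier if

$$\rho(x) \;\le\; e^{\varepsilon}\, \rho'(x) \qquad \text{for all } \rho, \rho' \in M_\varepsilon \text{ and all } x .$$

If a learner maps every dataset $D$, of any size, to some density $\rho_D \in M_\varepsilon$, then for any two datasets $D, D'$ and any measurable event $S$,

$$\Pr_{x \sim \rho_D}[x \in S] \;\le\; e^{\varepsilon}\, \Pr_{x \sim \rho_{D'}}[x \in S],$$

so a sample drawn from the learned density satisfies the $\varepsilon$ likelihood-ratio bound uniformly over all pairs of datasets, not merely neighboring ones, which is what makes the size of the protected group irrelevant.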
