Learning Numeric Optimal Differentially Private Truncated Additive Mechanisms

Differentially private (DP) mechanisms face the challenge of providing accurate results while protecting their inputs: the privacy-utility trade-off. A simple but powerful technique for DP adds noise to sensitivity-bounded query outputs to blur the exact query output: additive mechanisms. While a vast body of work considers infinitely wide noise distributions, some applications (e.g., real-time operating systems) require hard bounds on the deviations from the real query output, and only limited work on such mechanisms exists. An additive mechanism with truncated noise (i.e., with bounded range) can offer such hard bounds. We introduce a gradient-descent-based tool to learn truncated noise for additive mechanisms with strong utility bounds while simultaneously optimizing for differential privacy under sequential composition, i.e., scenarios where multiple noisy queries on the same data are revealed. Our method can learn discrete noise patterns, not merely the hyper-parameters of a predefined probability distribution. For sensitivity-bounded mechanisms, we show that it suffices to consider symmetric noise and that, for noise that decreases monotonically away from the mean, ensuring privacy for a pair of representative query outputs guarantees privacy for all pairs of inputs (that differ in one element). We find that the utility-privacy trade-off curves of our generated noise are remarkably close to those of truncated Gaussians and even replicate their shape for l2 utility loss. For a low number of compositions, we also improve upon DP-SGD (sub-sampling). Moreover, we extend the Moments Accountant to truncated distributions, allowing it to incorporate mechanism output events whose zero-occurrence probability depends on the input.
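To make the learning step concrete, below is a minimal PyTorch sketch of the core idea under our own assumptions (an illustration, not the authors' released tool): a discrete, symmetric noise distribution on a bounded support is parameterized by mirrored logits and optimized by gradient descent against an expected l1 utility loss plus a penalty on the tight delta(eps) for one representative pair of query outputs shifted by the sensitivity. The support size K, sensitivity, eps, learning rate, and penalty weight are illustrative choices, and monotonicity from the mean, which the representative-pair argument relies on, is not enforced in this toy version.

```python
# Minimal sketch (illustration under stated assumptions, not the authors'
# released tool): learn a discrete, symmetric truncated noise distribution
# by gradient descent, trading expected utility loss against a DP penalty
# for one representative pair of query outputs.
import math
import torch

K = 20          # noise support: integers -K..K (hard output bound, assumption)
SENS = 1        # query sensitivity on the integer grid (assumption)
EPS = 1.0       # target epsilon (assumption)
LAMBDA = 100.0  # weight of the DP-violation penalty (assumption)

# Logits for noise values 0..K only; mirroring them enforces symmetry.
half_logits = torch.zeros(K + 1, requires_grad=True)
opt = torch.optim.Adam([half_logits], lr=0.05)
support = torch.arange(-K, K + 1, dtype=torch.float32)

def noise_probs():
    # Mirror the half to a full symmetric distribution p(-k) = p(k) on -K..K.
    full = torch.cat([half_logits.flip(0)[:-1], half_logits])
    return torch.softmax(full, dim=0)

for step in range(2000):
    p = noise_probs()
    # Expected l1 deviation of the noisy output from the true query value.
    util = (p * support.abs()).sum()
    # Representative pair: the same noise applied to two query values that
    # differ by SENS. Pad both onto one common output grid; truncation means
    # the shifted distribution has zero mass on SENS boundary bins.
    p_ext = torch.cat([p, torch.zeros(SENS)])
    q_ext = torch.cat([torch.zeros(SENS), p])
    # Tight delta at EPS: total mass where p exceeds e^EPS * q.
    delta = torch.clamp(p_ext - math.exp(EPS) * q_ext, min=0.0).sum()
    loss = util + LAMBDA * delta
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"E|noise| = {util.item():.3f}, delta(eps={EPS}) = {delta.item():.5f}")
```

Mirroring the logits bakes the symmetry result directly into the parameterization, so the optimizer only searches over the noise shape on one side of the mean.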

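The Moments Accountant extension mentioned in the last sentence can be pictured as follows (our simplified reading, not the paper's exact accountant): the lambda-th privacy-loss moments are computed only over outputs that both neighboring output distributions can produce, while the mass a truncated distribution assigns to outputs its shifted counterpart can never produce is tracked separately and folded into delta. The distributions, moment orders, and the union-bound treatment of the zero-probability events below are assumptions made for illustration.

```python
# Sketch of a Moments Accountant for truncated noise (a simplified reading,
# not the paper's exact accountant): restrict the moments to the common
# support and carry the zero-probability mass separately into delta.
import math
import torch

def truncated_moments(p, q, orders):
    """alpha(lam) = log E_{o~p}[(p(o)/q(o))**lam] restricted to q > 0, plus
    the leftover zero-probability mass, which plain Renyi-style moments
    cannot absorb (the ratio would be infinite there)."""
    supp = q > 0
    zero_mass = p[~supp].sum()
    ratios = p[supp] / q[supp]
    alphas = [torch.log((p[supp] * ratios ** lam).sum()) for lam in orders]
    return alphas, zero_mass

# Illustrative truncated noise on a 5-point grid and its SENS=1 shift,
# padded to a common output grid (numbers made up for the example).
p = torch.tensor([0.02, 0.08, 0.80, 0.08, 0.02, 0.00])
q = torch.tensor([0.00, 0.02, 0.08, 0.80, 0.08, 0.02])
orders = [1.0, 2.0, 4.0, 8.0]
alphas, zmass = truncated_moments(p, q, orders)

# Standard moments-accountant tail bound over k compositions; the zero-mass
# events are accounted for additively (a union bound over the k queries).
k, tail_delta = 10, 1e-3
eps = min((k * a.item() + math.log(1 / tail_delta)) / lam
          for a, lam in zip(alphas, orders))
print(f"eps ~ {eps:.2f} at total delta ~ {tail_delta} + {k} * {zmass.item():.2f}")
```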