Abstract In distributed or multiparty computations, methods from optimization theory offer appealing privacy properties compared with cryptographic and differential-privacy methods. However, unlike cryptography and differential privacy, optimization methods currently lack a formal quantification of the privacy they can provide. The main contribution of this paper is to propose such a quantification for a broad class of optimization approaches. These optimization procedures render the problem’s data ambiguous to an adversarial observer, who therefore observes the data only up to an uncertainty set. We formally define a one-to-many relation between a given message observed by the adversary and the corresponding uncertainty set of the problem’s data. Based on this uncertainty set, a privacy measure is then formalized, and its properties are analyzed. The key ideas are illustrated with examples, including localization and average consensus.
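To make the one-to-many relation concrete, consider the average-consensus example mentioned above: an adversary who observes only the network-wide average m of n private values x_1, …, x_n cannot distinguish among any inputs satisfying (1/n)·Σ x_i = m, so a single observed message maps to a whole uncertainty set of consistent data. The sketch below illustrates this idea only; the finite grid domain, the tolerance parameter, and the use of the uncertainty set’s diameter as the privacy measure are illustrative assumptions, not the paper’s actual definitions.

```python
import itertools
import math

def uncertainty_set(observed_avg, n, grid, tol=1e-9):
    """Enumerate all private inputs on a finite grid that are consistent
    with the adversary's observed message (here, the average).
    This realizes the one-to-many relation: one message -> many inputs."""
    return [x for x in itertools.product(grid, repeat=n)
            if abs(sum(x) / n - observed_avg) <= tol]

def privacy_measure(U):
    """Illustrative privacy measure (an assumption): the diameter of the
    uncertainty set. A larger diameter means more ambiguity, for the
    adversary, about the true private data."""
    return max((math.dist(a, b) for a in U for b in U), default=0.0)

# Three agents with private values on the grid {0, 0.25, ..., 1};
# the adversary observes only their average, 0.5.
grid = [i / 4 for i in range(5)]
U = uncertainty_set(observed_avg=0.5, n=3, grid=grid)
print(len(U), privacy_measure(U))  # many consistent inputs, diameter > 0
```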