On the Privacy of Optimization

Abstract In distributed or multiparty computations, methods from optimization theory offer appealing privacy properties compared to cryptographic and differential-privacy approaches. However, unlike cryptography and differential privacy, optimization methods currently lack a formal quantification of the privacy they can provide. The main contribution of this paper is to propose such a quantification for a broad class of optimization approaches. The optimization procedures generate ambiguity about the problem’s data for an adversarial observer, who therefore observes the data only within an uncertainty set. We formally define a one-to-many relation between a message observed by the adversary and an uncertainty set of the problem’s data. Based on this uncertainty set, a privacy measure is then formalized, and its properties are analyzed. The key ideas are illustrated with examples, including localization and average consensus.