Privacy in Distributed Average Consensus

Abstract. Distributed average consensus refers to computing the average of inputs held by multiple agents communicating with each other over a peer-to-peer network. Cooperation amongst agents is imperative for any distributed average consensus protocol, as each agent has to share its input with other agents, usually its adjacent (neighboring) agents. However, privacy concerns may discourage some agents from participating in such protocols. This paper proposes a novel distributed privacy mechanism that preserves the privacy of the collection of honest agents’ inputs as long as the colluding semi-honest agents do not form a vertex cut of the network. The proposed privacy mechanism does not alter the average of the agents’ inputs; consequently, it cannot protect information that is already revealed by the average itself. It imposes minimal additional computation and communication costs, requires no alteration of the underlying distributed consensus protocol, and promises a highly scalable, practical solution for privacy in distributed average consensus. The privacy achieved is quantified using Kullback-Leibler divergence (KL-divergence), and limitations are discussed analytically for two cases: (i) inputs are continuous random variables, and (ii) inputs are discrete random variables.
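The abstract does not spell out the mechanism, but a common way to perturb inputs without altering their average is edge-wise correlated noise: for each network edge, one endpoint adds a random offset and the other subtracts the same offset, so the masks cancel in the network-wide sum. The sketch below illustrates only this average-preserving property; the function name, the ring topology, and the noise distribution are illustrative assumptions, not the paper's construction.

```python
import random

def mask_inputs(inputs, edges, seed=None):
    """Perturb each agent's input with correlated zero-sum noise.

    For every edge (i, j), agent i adds a random offset r and agent j
    subtracts the same r, so the sum (and hence the average) of the
    masked inputs equals that of the true inputs.
    """
    rng = random.Random(seed)
    masked = list(inputs)
    for i, j in edges:
        r = rng.uniform(-10.0, 10.0)  # illustrative noise distribution
        masked[i] += r
        masked[j] -= r
    return masked

# Four agents on a ring network (hypothetical example topology).
inputs = [3.0, 7.0, 1.0, 5.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
masked = mask_inputs(inputs, edges, seed=1)

# The average is preserved (up to floating-point error), while the
# individual masked values no longer reveal the true inputs.
assert abs(sum(masked) / len(masked) - sum(inputs) / len(inputs)) < 1e-9
```

Any standard (non-private) average consensus protocol can then be run on the masked values unchanged, which is consistent with the abstract's claim that the consensus protocol itself needs no modification.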