Comment on “Federated Learning With Differential Privacy: Algorithms and Performance Analysis”

Wei et al. [4] propose a differentially private federated learning algorithm and analyze its performance, the central result being a convergence bound on the loss function. In this comment, we show that some of the mathematical derivations in [4] are not valid, and consequently the bounds proved there do not hold for all loss functions. We give a correct derivation of the tightest local sensitivity bound that holds for all loss functions, and we state the corresponding corrections to the global sensitivity bound and to the standard deviation of the Gaussian noise added both before and after aggregation.
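To make the quantities at issue concrete, the following is a minimal sketch of the standard (epsilon, delta) Gaussian-mechanism calibration (in the style of Dwork and Roth) under which such sensitivity bounds are applied; the symbols below are generic placeholders, not the notation of [4] or the corrected bounds derived in this comment:

% Hedged sketch: s(.) denotes a client's released update, D and D' are
% adjacent local datasets, and (\epsilon, \delta) is the privacy budget.
% These symbols are illustrative, not the notation of Wei et al.
\[
  \Delta s \;=\; \max_{\mathcal{D},\,\mathcal{D}'} \bigl\lVert s(\mathcal{D}) - s(\mathcal{D}') \bigr\rVert_{2},
  \qquad
  \sigma \;\ge\; \frac{\Delta s \,\sqrt{2\ln(1.25/\delta)}}{\epsilon}
  \quad \text{for } \epsilon \in (0,1).
\]

Since sigma scales linearly with the sensitivity, any error in the derivation of the sensitivity bound propagates one-for-one into the noise standard deviation required for a given privacy budget, and from there into the convergence bound on the loss function.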

[1] L. Song et al., "A Privacy-Preserving Incentive Mechanism for Federated Cloud-Edge Learning," in Proc. 2021 IEEE Global Communications Conference (GLOBECOM), 2021.

[2] L. Song et al., "Privacy-Preserving Incentive Mechanism Design for Federated Cloud-Edge Learning," IEEE Transactions on Network Science and Engineering, 2021.

[3] Y. C. Eldar et al., "Federated Learning: A Signal Processing Perspective," IEEE Signal Processing Magazine, 2021.

[4] K. Wei, H. V. Poor, et al., "Federated Learning With Differential Privacy: Algorithms and Performance Analysis," IEEE Transactions on Information Forensics and Security, 2019.

[5] J. B. Gray et al., "Introduction to Linear Regression Analysis," Technometrics, 2002.