Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing

Federated learning is an emerging data-private distributed learning framework, which, however, is vulnerable to adversarial attacks. Although several heuristic defenses have been proposed to enhance the robustness of federated learning, they do not provide certifiable robustness guarantees. In this paper, we incorporate randomized smoothing techniques into federated adversarial training to enable data-private distributed learning with certifiable robustness to test-time adversarial perturbations. Through comprehensive experiments, we show that such an advanced federated adversarial learning framework can deliver models as robust as those trained by centralized training. Further, this enables training classifiers that are provably robust to $\ell_2$-bounded adversarial perturbations in a distributed setup. We also show that the one-point gradient estimation-based training approach is $2\text{--}3\times$ faster than the popular stochastic estimator-based approach, with no noticeable difference in certified robustness.
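The two ingredients named in the abstract, randomized smoothing certification and one-point zeroth-order gradient estimation, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `smoothed_predict` follows the Cohen et al. recipe of Monte Carlo voting under Gaussian noise and reports a certified $\ell_2$ radius $\sigma\,\Phi^{-1}(p_A)$ (a real certificate would use a lower confidence bound on $p_A$ rather than the raw empirical frequency), and `one_point_gradient` is the standard single-query zeroth-order estimator $\frac{d}{\mu}\,\ell(x+\mu u)\,u$ with a random unit direction $u$. All function names and parameters here are assumptions for illustration.

```python
import numpy as np
from statistics import NormalDist


def smoothed_predict(f, x, sigma, n_samples=1000, rng=None):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[f(x + N(0, sigma^2 I)) = c], with a certified l2 radius.

    Note: a rigorous certificate replaces the empirical frequency p_a with a
    lower confidence bound (e.g. Clopper-Pearson); this sketch omits that step.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = {}
    for _ in range(n_samples):
        noise = rng.normal(0.0, sigma, size=x.shape)
        c = f(x + noise)
        counts[c] = counts.get(c, 0) + 1
    top_class, top_count = max(counts.items(), key=lambda kv: kv[1])
    # Clamp away from 1.0 so the Gaussian quantile stays finite.
    p_a = min(top_count / n_samples, 1.0 - 1e-6)
    if p_a <= 0.5:
        return top_class, 0.0  # abstain: no certificate possible
    radius = sigma * NormalDist().inv_cdf(p_a)  # certified l2 radius
    return top_class, radius


def one_point_gradient(loss, x, mu=1e-2, rng=None):
    """One-point zeroth-order gradient estimate: (d / mu) * loss(x + mu * u) * u,
    where u is a random direction on the unit sphere. Needs a single function
    query per estimate, versus two for the two-point (stochastic) estimator."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)
    d = x.size
    return (d / mu) * loss(x + mu * u) * u
```

A usage sketch: with a toy classifier `f = lambda z: int(z.sum() > 0)` and `x = np.ones(4)`, `smoothed_predict(f, x, sigma=0.5)` returns class 1 together with a positive certified radius, because the noisy votes overwhelmingly agree. The one-point estimator trades higher variance for half the queries of the two-point estimator, which is the source of the speedup the abstract reports.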
