Towards a Defense against Backdoor Attacks in Continual Federated Learning

Backdoor attacks are a major concern in federated learning (FL) pipelines where training data is sourced from untrusted clients over long periods of time (i.e., continual learning). Preventing such attacks is difficult because defenders in FL do not have access to raw training data. Moreover, in a phenomenon we call backdoor leakage, models trained continuously eventually suffer from backdoors due to cumulative errors in backdoor defense mechanisms. We propose a novel framework for defending against backdoor attacks in the federated continual learning setting. Our framework trains two models in parallel: a backbone model and a shadow model. The backbone is trained without any defense mechanism to obtain good performance on the main task. The shadow model combines recent ideas from robust covariance estimation-based filters with early stopping to control the attack success rate even as the data distribution changes. We provide theoretical motivation for this design and show experimentally that our framework significantly improves upon existing defenses against backdoor attacks.
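The dual-model idea can be illustrated with a short, hypothetical sketch. Nothing below comes from the paper itself: the client updates are synthetic vectors, the constants (FILTER_FRAC, SHADOW_RESET_EVERY) are invented for illustration, a simple spectral-signature-style outlier score stands in for the robust covariance estimation-based filter, and a periodic reset of the shadow model stands in for early stopping.

```python
# Minimal, hypothetical sketch of the backbone/shadow training split described above.
# Assumptions (not from the paper): synthetic linear "updates", a spectral-signature-style
# outlier score instead of the robust covariance-estimation filter, and a fixed reset
# window as a crude proxy for early stopping.

import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLIENTS, N_ROUNDS = 32, 20, 50
FILTER_FRAC = 0.2          # fraction of highest-scoring updates to discard each round
SHADOW_RESET_EVERY = 10    # stand-in for early stopping / periodic retraining

backbone = np.zeros(DIM)
shadow = np.zeros(DIM)

def client_updates(round_idx):
    """Simulate benign updates plus a few malicious (backdoored) ones."""
    benign = rng.normal(0.0, 1.0, size=(N_CLIENTS - 3, DIM))
    malicious = rng.normal(5.0, 0.5, size=(3, DIM))  # shifted, tightly clustered direction
    return np.vstack([benign, malicious])

def spectral_scores(updates):
    """Outlier score = squared projection onto the top singular direction
    of the centered updates (spectral-signature style)."""
    centered = updates - updates.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (centered @ vt[0]) ** 2

for t in range(N_ROUNDS):
    updates = client_updates(t)

    # Backbone: aggregate everything with no defense (good main-task utility).
    backbone += updates.mean(axis=0)

    # Shadow: drop the most suspicious updates before aggregating.
    scores = spectral_scores(updates)
    keep = scores <= np.quantile(scores, 1.0 - FILTER_FRAC)
    shadow += updates[keep].mean(axis=0)

    # Early-stopping proxy: periodically restart the shadow model so residual
    # poisoned signal cannot accumulate across rounds (the "backdoor leakage" issue).
    if (t + 1) % SHADOW_RESET_EVERY == 0:
        shadow = np.zeros(DIM)

print("backbone norm:", np.linalg.norm(backbone))
print("shadow   norm:", np.linalg.norm(shadow))
```

The point of the split, as described in the abstract, is that the backbone's main-task performance never depends on the filter being right, while the shadow model's repeated filtering and restarting keeps small per-round defense errors from accumulating into a planted backdoor.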
