Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation

Federated Learning (FL) is a Machine Learning (ML) paradigm that addresses issues of data privacy, security, access rights, and access to heterogeneous data by training a global model across distributed nodes. Despite these advantages, FL-based ML techniques face an increased potential for cyberattacks that can undermine their benefits. Untargeted model-poisoning attacks on FL target the availability of the global model: the adversarial objective is to disrupt training. We propose attestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect malicious workers. A fine-grained assessment of a worker's history permits the evaluation of its behavior over time and enables novel detection strategies. We present three lines of defense that assess whether a worker is reliable by observing whether the node is truly training and advancing towards a goal. Our defense exposes an attacker's malicious behavior and removes unreliable nodes from the aggregation process so that the FL process converges faster. attestedFL increased the accuracy of the model in different FL settings and under different attack patterns and scenarios, e.g., attacks performed at different stages of convergence, colluding attackers, and continuous attacks.

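The abstract does not spell out the three lines of defense in detail. As a rough illustration of the overall idea, the hypothetical sketch below keeps a short per-worker history of submitted models and excludes from aggregation any worker that is either not visibly updating its model or not moving towards the current global model over the observation window. The class and function names (WorkerHistory, is_training, is_converging, filter_reliable), the window size, and the tolerance are assumptions made for illustration, not the paper's actual attestation rules.

```python
import numpy as np

class WorkerHistory:
    """Hypothetical per-worker record of recently submitted local models."""

    def __init__(self, window=5):
        self.window = window   # number of recent rounds kept per worker
        self.updates = []      # flattened local model parameters, one entry per round

    def record(self, local_model):
        # Store the worker's latest submitted model, keeping only the last `window` rounds.
        self.updates.append(np.asarray(local_model, dtype=float).ravel())
        self.updates = self.updates[-self.window:]

    def is_training(self, tol=1e-6):
        # A node that is truly training should change its model between rounds;
        # (near-)identical successive submissions look like a node that is not training.
        if len(self.updates) < 2:
            return True
        deltas = [np.linalg.norm(b - a) for a, b in zip(self.updates, self.updates[1:])]
        return max(deltas) > tol

    def is_converging(self, global_model):
        # A reliable worker should, over the observation window, move towards
        # the current global model rather than drift away from it.
        if len(self.updates) < self.window:
            return True
        g = np.asarray(global_model, dtype=float).ravel()
        dists = [np.linalg.norm(u - g) for u in self.updates]
        return dists[-1] <= dists[0]


def filter_reliable(histories, global_model):
    """Keep only workers whose recorded behavior looks like genuine training."""
    return {wid: h for wid, h in histories.items()
            if h.is_training() and h.is_converging(global_model)}
```

In this sketch, a server-side aggregator would call record() on each worker's submission every round and then aggregate only the updates of workers returned by filter_reliable(), mirroring the paper's goal of removing unreliable nodes before aggregation.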