Mitigation of poisoning attacks in federated learning using historical distance detection

Federated learning enables users to jointly train a model while keeping their data stored locally, making it an attractive privacy-preserving machine learning framework. At the same time, the framework is exposed to availability and integrity threats: because the server cannot distinguish local models at aggregation, malicious clients may masquerade as benign ones and interfere with the global model. This behavior is known as a poisoning attack and is generally divided into data poisoning and model poisoning. In this paper, we consider a federated learning scenario with one reliable central server and several clients, among which some malicious clients launch poisoning attacks. In this scenario we explore the statistical relationship of the Euclidean distances among models, both between pairs of benign models and between malicious and benign models. Based on these findings, and inspired by evolutionary clustering, we design a defense method that screens possible malicious agents and mitigates their attack before each round's aggregation. The scheme is implemented at the central server side, and its mitigation decision refers to the detection results of both the current round and the previous round. Finally, we demonstrate the effectiveness of our scheme through experiments in several different scenarios.
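
To make the idea concrete, the following is a minimal illustrative sketch, not the authors' exact algorithm: each client's update is scored by its mean Euclidean distance to the other updates, the score is smoothed with the previous round's score in the spirit of evolutionary clustering, and outliers are flagged before aggregation. The function names and the parameters alpha and threshold are hypothetical choices made only for this example.

```python
import numpy as np

def flatten(model_params):
    """Concatenate a client's parameter tensors into one vector."""
    return np.concatenate([p.ravel() for p in model_params])

def detect_malicious(client_updates, prev_scores, alpha=0.5, threshold=2.0):
    """Return (scores, suspected_ids) for the current round.

    client_updates: dict client_id -> list of np.ndarray (local model parameters)
    prev_scores:    dict client_id -> float (last round's score, 0.0 if unseen)
    alpha:          weight on the current round versus the historical score
    threshold:      number of standard deviations above the mean marking a suspect
    """
    vecs = {cid: flatten(p) for cid, p in client_updates.items()}
    ids = list(vecs)

    # Current-round score: mean Euclidean distance to every other client's update.
    cur = {}
    for cid in ids:
        dists = [np.linalg.norm(vecs[cid] - vecs[o]) for o in ids if o != cid]
        cur[cid] = float(np.mean(dists))

    # Temporal smoothing: combine the current round's score with the previous round's,
    # so a client must look anomalous consistently, not just once.
    scores = {cid: alpha * cur[cid] + (1 - alpha) * prev_scores.get(cid, 0.0)
              for cid in ids}

    # Flag clients whose smoothed score is far above the population mean.
    vals = np.array(list(scores.values()))
    cutoff = vals.mean() + threshold * vals.std()
    suspected = [cid for cid in ids if scores[cid] > cutoff]
    return scores, suspected
```

In such a sketch the server would exclude (or down-weight) the suspected clients from the round's aggregation and carry the returned scores forward as prev_scores for the next round; the actual detection and mitigation rules used in the paper are those described in the method section.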