Efficient Client Contribution Evaluation for Horizontal Federated Learning

In federated learning (FL), fair and accurate measurement of each participant's contribution is of great significance. The contribution level not only provides a rational metric for distributing financial rewards among federated participants, but also helps to identify malicious participants that try to poison the FL framework. Previous contribution-measurement methods were based on enumeration over the possible combinations of federated participants; their computation costs increase drastically with the number of participants or feature dimensions, making them inapplicable in practical situations. In this paper, an efficient method is proposed to evaluate the contributions of federated participants. The paper focuses on the horizontal FL framework, where client servers compute parameter gradients over their local data and upload the gradients to the central server. Before aggregating the client gradients, the central server trains a data value estimator of the gradients using reinforcement learning techniques. Experimental results show that the proposed method consistently outperforms the conventional leave-one-out method in terms of both valuation authenticity and time complexity.
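The data flow described above can be sketched as follows. This is a minimal illustrative skeleton under stated assumptions, not the authors' implementation: clients compute gradients on local data, a value estimator on the central server scores each uploaded gradient, and the scores weight the aggregation. The estimator here is a fixed linear scorer with softmax weighting (a placeholder), whereas the paper trains the estimator with reinforcement learning; the least-squares objective and all names (`client_gradient`, `ValueEstimator`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_gradient(w, X, y):
    # Gradient of the local least-squares loss: X^T (X w - y) / n
    return X.T @ (X @ w - y) / len(y)

class ValueEstimator:
    """Scores each client's uploaded gradient; scores become
    aggregation weights. A placeholder for the RL-trained
    estimator described in the paper."""
    def __init__(self, dim):
        self.theta = np.zeros(dim)  # scorer parameters (untrained here)

    def scores(self, grads):
        logits = np.array([g @ self.theta for g in grads])
        e = np.exp(logits - logits.max())
        return e / e.sum()  # softmax -> per-client weights summing to 1

dim, n_clients = 5, 3
w = np.zeros(dim)
# Each client holds a private (X, y) dataset; synthetic data here.
clients = [(rng.normal(size=(20, dim)), rng.normal(size=20))
           for _ in range(n_clients)]

estimator = ValueEstimator(dim)
for _ in range(10):  # federated rounds
    grads = [client_gradient(w, X, y) for X, y in clients]
    alpha = estimator.scores(grads)       # contribution weights
    agg = sum(a * g for a, g in zip(alpha, grads))
    w -= 0.1 * agg                        # central-server update
```

In the paper's setting, `estimator` would be updated each round from a reward signal (e.g. validation performance of the aggregated model), so that clients whose gradients improve the global model receive higher contribution scores.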