Reputation-enabled Federated Learning Model Aggregation in Mobile Platforms

Federated Learning (FL) builds on a mobile network of participating nodes that train local models and contribute the resulting model parameters to a central server without being obliged to share their raw data. The server aggregates the uploaded model parameters to generate a global model. Common practice is to aggregate the uploaded local models with equal weights, assuming that every node of the network contributes equally to advancing the global model. However, due to the heterogeneous nature of the devices and their collected data, variations between users' contributions to the global model are inevitable. Therefore, users (i.e., devices) with higher contributions should receive higher weights during aggregation. With this in mind, this paper proposes a reputation-enabled aggregation methodology that scales each user's aggregation weight by their reputation score. The reputation score of a user is computed from the performance metrics of their trained local model in each training round, and therefore serves as a measure of the direct contribution of that local model. A numerical comparison of the proposed aggregation methodology against a baseline that uses standard averaging, as well as a second baseline limited to reputation-based client selection, shows an improvement of 17.175% over the standard baseline for non-independent and identically distributed (non-IID) scenarios in an FL network of 100 participants. Consistent improvements over the first and second baselines are also shown for smaller FL networks ranging from 20 to 100 users.
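
To make the aggregation rule concrete, the sketch below shows a minimal, hypothetical reputation-weighted variant of federated averaging: each client's model parameters are scaled by a normalized reputation score derived from a per-round performance metric such as local validation accuracy. The function name, the use of accuracy as the metric, and the normalization scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def reputation_weighted_aggregate(client_updates, reputations):
    """Aggregate client model parameters, scaling each client by its
    reputation score (assumed non-negative).

    client_updates: list of dicts mapping layer name -> np.ndarray
    reputations:    list of floats, one per client (e.g., derived from
                    local validation accuracy in the current round)
    """
    # Normalize reputations so the aggregation weights sum to 1;
    # fall back to plain averaging if all reputations are zero.
    rep = np.asarray(reputations, dtype=float)
    total = rep.sum()
    weights = rep / total if total > 0 else np.full(len(rep), 1.0 / len(rep))

    # Weighted sum of parameters, layer by layer.
    global_model = {}
    for layer in client_updates[0]:
        global_model[layer] = sum(
            w * update[layer] for w, update in zip(weights, client_updates)
        )
    return global_model


# Illustrative usage: three clients, reputation proportional to local accuracy.
clients = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.5])},
    {"w": np.array([0.8, 1.9]), "b": np.array([0.4])},
    {"w": np.array([1.2, 2.1]), "b": np.array([0.6])},
]
local_accuracies = [0.91, 0.74, 0.88]  # hypothetical per-round metrics
print(reputation_weighted_aggregate(clients, local_accuracies))
```

With equal reputations this reduces to standard averaging; higher-reputation clients pull the global model proportionally more toward their local parameters.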