Computation Offloading for Edge-Assisted Federated Learning

When machine learning is applied to the Internet of Things, aggregating massive amounts of raw data at a central server severely reduces system efficiency. To tackle this challenge, a distributed learning framework called federated learning has been proposed. However, due to its parallel training structure, federated learning suffers from the straggler effect: each update round must wait for the slowest client. In this paper, to mitigate the straggler effect, we propose a novel learning scheme, edge-assisted federated learning (EAFL), which uses edge computing to reduce the computational burden on stragglers. EAFL enables stragglers to offload part of their computation to the edge server, leveraging the server's idle computing power to assist clients in model training. We optimize the offloaded data size to minimize the learning delay of the system and, based on the optimized size, propose a threshold-based offloading strategy for EAFL. Moreover, we extend EAFL to a dynamic scenario in which clients may go offline after several update rounds. By grouping clients into different sets, we formulate a new EAFL delay-optimization problem and derive the corresponding offloading strategy for the dynamic setting. Simulation results show that EAFL achieves lower system delay than the original federated learning scheme.
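To make the offloading idea concrete, the following is a minimal sketch of a threshold-based partial-offload rule under an assumed toy delay model (not the paper's exact formulation): a straggler holding D samples offloads x of them, its local training runs in parallel with transmission to and computation at the edge server, and the per-round delay is the maximum of the two paths. All parameter names (c, s, f_client, f_edge, rate, T) are illustrative assumptions.

```python
def optimal_offload(D, c, s, f_client, f_edge, rate):
    """Offloaded sample count that equalizes the local path (D - x) * c / f_client
    and the offload path x * s / rate + x * c / f_edge, clipped to [0, D].
    D: local samples, c: CPU cycles per sample, s: bits per sample,
    f_client/f_edge: CPU speeds (cycles/s), rate: uplink rate (bits/s).
    Closed form for this toy model only; an assumption, not the paper's solution."""
    x = (D * c / f_client) / (s / rate + c / f_edge + c / f_client)
    return min(max(x, 0.0), D)

def round_delay(x, D, c, s, f_client, f_edge, rate):
    """Per-round delay when x samples are offloaded: local compute runs in
    parallel with (transmit x, then train x at the edge)."""
    local = (D - x) * c / f_client
    offload = x * s / rate + x * c / f_edge
    return max(local, offload)

def threshold_offload(D, c, s, f_client, f_edge, rate, T):
    """Threshold rule: offload nothing unless purely local training
    would exceed the delay threshold T (e.g., the non-straggler delay)."""
    if D * c / f_client <= T:
        return 0.0
    return optimal_offload(D, c, s, f_client, f_edge, rate)
```

With D = 1000, c = 1e6, s = 1e4, f_client = 1e9, f_edge = 1e10, rate = 1e8, purely local training takes 1 s, while the equalizing split offloads about 833 samples and cuts the round delay to roughly 0.17 s, illustrating how the optimized data size shortens a straggler's round.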