Over-the-air Learning Rate Optimization for Federated Learning

Sixth-generation (6G) wireless communication is expected to support ubiquitous artificial intelligence (AI) applications from the network core to the end devices. Computationally intensive AI tasks are traditionally deployed in centralized servers, which inevitably raises latency and privacy concerns. As a promising framework, federated learning addresses these concerns by performing distributed learning at the devices and model aggregation at a central aggregator. Exploiting over-the-air computation to aggregate the local models further improves communication efficiency compared with the conventional separate-communication-and-computation principle. However, the distortion induced by fading and noisy channels is critical, since a large aggregation error may degrade learning and inference performance. Motivated by the tunability of the learning rate, we propose to exploit a dynamic learning rate (DLR) that adapts to the fading channels, which is proved to further combat this distortion. The problem is formulated in a single-input multiple-output (SIMO) system with the objective of minimizing the aggregation error, and an iterative method is proposed to solve it. Extensive simulation results demonstrate the effectiveness of the proposed DLR in terms of mean squared error performance as well as testing accuracy on the CIFAR-10 dataset.
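To make the setting concrete, the following is a minimal illustrative sketch of over-the-air model aggregation with a round-dependent learning rate, written in a commonly assumed form rather than the exact formulation of this paper; the symbols $h_k^t$, $b_k^t$, $\zeta^t$, $\eta_t$, $K$, and $\mathbf{n}^t$ are placeholders and not necessarily the paper's notation.

```latex
% Illustrative over-the-air aggregation with a dynamic learning rate
% (assumed standard form; notation is a placeholder, not the paper's).
\begin{align}
  \mathbf{y}^t &= \sum_{k=1}^{K} h_k^t\, b_k^t\, \mathbf{g}_k^t + \mathbf{n}^t
    && \text{(local updates superimposed over the fading channel)} \\
  \hat{\mathbf{g}}^t &= \frac{1}{K\,\zeta^t}\,\mathbf{y}^t
    && \text{(receive-side de-scaling by a factor } \zeta^t\text{)} \\
  \mathbf{w}^{t+1} &= \mathbf{w}^t - \eta_t\, \hat{\mathbf{g}}^t
    && \text{(global update with round-dependent learning rate } \eta_t\text{)}
\end{align}
```

Under this kind of model, the intuition behind a dynamic learning rate is that $\eta_t$ can be co-designed with the receive de-scaling so that part of the channel-induced amplitude mismatch is absorbed into the step size rather than amplifying the noise term, which is consistent with the distortion-combating role of the DLR described above.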