Dynamic Client Association for Energy-Aware Hierarchical Federated Learning

Federated learning (FL) has emerged as a promising solution for training a shared model without exchanging local training samples. However, in the traditional cloud-based FL framework, clients suffer from limited energy budgets and generate excessive communication overhead on the backbone network. These drawbacks motivate us to propose an energy-aware hierarchical federated learning framework in which edge servers assist the cloud server in aggregating the local models from the clients. A joint local computing power control and client association problem is then formulated to minimize the training loss and the training latency simultaneously under long-term energy constraints. To solve this problem, we recast it based on the general Lyapunov optimization framework, replacing the long-term energy constraints with instantaneous energy budgets. We then propose a heuristic algorithm that takes the importance of local updates into account to obtain a suboptimal solution in polynomial time. Numerical results demonstrate that the proposed algorithm reduces the training latency compared with a scheme using greedy client association and myopic energy control, and improves the learning performance compared with a scheme in which the associated clients transmit their local models at maximal power.
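The core idea behind recasting the long-term energy constraints is the standard Lyapunov drift-plus-penalty technique: each client maintains a virtual energy queue whose backlog penalizes clients that have overspent their average budget, so a per-round decision can be made without knowledge of future rounds. The sketch below illustrates this mechanism only; the scoring rule, the per-client quantities (energy cost, latency, update importance), and all parameter names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def lyapunov_client_association(queues, energy_cost, latency, importance,
                                per_round_budget, V=1.0):
    """One round of drift-plus-penalty client association (illustrative sketch).

    queues[n]      -- virtual energy queue of client n (accumulated energy debt)
    energy_cost[n] -- energy client n would spend this round if selected
    latency[n]     -- latency contribution of selecting client n
    importance[n]  -- importance of client n's local update (assumed given)
    V              -- trade-off weight between the penalty and queue stability
    """
    # Per-client marginal score: V-weighted penalty (latency minus update
    # importance) plus queue-weighted energy cost. Lower is better; a client
    # with a large energy backlog is discouraged from transmitting.
    score = V * (latency - importance) + queues * energy_cost

    # Select clients whose marginal contribution to the drift-plus-penalty
    # bound is negative (a simple greedy rule for illustration).
    selected = score < 0

    # Virtual queue update: Q(t+1) = max(Q(t) - budget, 0) + energy consumed.
    consumed = np.where(selected, energy_cost, 0.0)
    new_queues = np.maximum(queues - per_round_budget, 0.0) + consumed
    return selected, new_queues
```

Iterating this update keeps each client's time-averaged energy consumption near `per_round_budget` while the `V`-weighted term steers the selection toward low-latency, high-importance updates, which is the trade-off the formulated problem targets.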
