A reinforcement learning-based computing offloading and resource allocation scheme in F-RAN

This paper investigates a computation offloading policy and the allocation of computational resources for multiple user equipments (UEs) in device-to-device (D2D) aided fog radio access networks (F-RANs). Considering the dynamically changing wireless environment, in which the channel state information (CSI) is difficult to predict or know exactly, we formulate the joint task offloading and resource optimization problem as a mixed-integer nonlinear programming problem that maximizes the total utility of all UEs. Owing to the non-convexity of the formulated problem, we decouple it into two phases. First, a centralized deep reinforcement learning (DRL) algorithm, the Dueling Deep Q-Network (DDQN), is used to obtain the most suitable offloading mode for each UE; in particular, a pre-processing procedure is adopted to reduce the complexity of the proposed DDQN-based offloading scheme. Then, a distributed Deep Q-Network (DQN) algorithm, built on the training result of the DDQN algorithm, is proposed to allocate the appropriate computational resources to each UE. Combining these two phases yields the optimal offloading policy and resource allocation for each UE. Simulation results demonstrate the performance gains of the proposed scheme over existing baseline schemes.
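
To make the first (centralized) phase concrete, below is a minimal sketch of a dueling Q-network of the kind the abstract refers to, written in PyTorch. It is not the authors' implementation: the state dimension, the three offloading modes, the layer sizes, and the epsilon-greedy usage at the end are illustrative assumptions. The sketch only shows the dueling decomposition Q(s, a) = V(s) + A(s, a) - mean_a A(s, a) applied to a per-UE discrete offloading decision.

```python
# Sketch of a dueling Q-network for the centralized offloading phase.
# Assumptions (not from the paper): state_dim, three offloading modes
# (local / D2D / fog node), hidden layer sizes, and the usage example.
import torch
import torch.nn as nn


class DuelingDQN(nn.Module):
    """Shared trunk followed by separate value and advantage streams,
    recombined as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value = nn.Linear(hidden, 1)                 # state value V(s)
        self.advantage = nn.Linear(hidden, num_actions)   # advantages A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v = self.value(h)                      # shape: (batch, 1)
        a = self.advantage(h)                  # shape: (batch, num_actions)
        return v + a - a.mean(dim=1, keepdim=True)


# Hypothetical usage: greedily pick an offloading mode for one UE
# from a placeholder observation of its local/channel state.
if __name__ == "__main__":
    state_dim, num_modes = 10, 3               # illustrative sizes
    net = DuelingDQN(state_dim, num_modes)
    state = torch.randn(1, state_dim)          # placeholder UE state
    q_values = net(state)
    offloading_mode = int(q_values.argmax(dim=1))
    print("selected offloading mode:", offloading_mode)
```

In the scheme described above, a network of this form would be trained centrally to choose each UE's offloading mode, after which the distributed DQN phase handles the computational resource allocation.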
