Dueling Deep Q-Network Learning Based Computing Offloading Scheme for F-RAN

In this paper, we investigate a computing offloading policy for multiple User Equipments (UEs) in Fog Radio Access Networks (F-RANs). Aiming to maximize the total utility of UEs in a dynamically changing wireless environment, we formulate the task offloading problem as a mixed-integer nonlinear programming (MINP) problem. To solve this nonconvex problem, we first utilize a centralized deep reinforcement learning (DRL) algorithm, the Dueling Deep Q-Network (DDQN), to obtain the most appropriate offloading mode for each UE under unknown Channel State Information (CSI). In particular, a pre-processing procedure is proposed to reduce the complexity of the DDQN algorithm. Then, by combining the training results of the DDQN with the delay requirements of each UE's task, we obtain the final optimal offloading policy for each UE. Simulation results demonstrate the performance gains of the proposed scheme compared with existing baseline schemes.
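The defining feature of the Dueling DQN used here is that the network splits into a state-value stream V(s) and an advantage stream A(s, a), which are recombined as Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a'). A minimal NumPy sketch of that combination step follows; the state dimension, action set (offloading modes), and randomly initialized weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sizes for illustration (not specified in the paper).
STATE_DIM = 8      # e.g., per-UE channel/queue features
NUM_ACTIONS = 3    # e.g., offloading modes: local, fog node, cloud

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Randomly initialized weights stand in for a trained network.
W_h = rng.normal(scale=0.1, size=(STATE_DIM, 16))     # shared hidden layer
W_v = rng.normal(scale=0.1, size=(16, 1))             # value stream V(s)
W_a = rng.normal(scale=0.1, size=(16, NUM_ACTIONS))   # advantage stream A(s, a)

def dueling_q(state):
    """Combine the two streams into Q-values:
    Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    h = relu(state @ W_h)
    v = h @ W_v          # shape (1,): state value
    a = h @ W_a          # shape (NUM_ACTIONS,): per-action advantages
    return v + a - a.mean()

state = rng.normal(size=STATE_DIM)
q = dueling_q(state)
best_mode = int(np.argmax(q))  # greedy offloading mode for this UE's state
```

Subtracting the mean advantage makes the decomposition identifiable (the mean of the Q-values equals V(s)), which is the standard dueling-architecture trick.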