An Optimal-Transport-Based Reinforcement Learning Approach for Computation Offloading

With the large-scale deployment of computation-intensive and delay-sensitive applications on end devices, adequate computing resources are required to meet the delay requirements of differentiated services. By offloading tasks to cloud or edge servers, computation offloading alleviates the computing and storage limitations of end devices and reduces delay and energy consumption. However, few existing offloading schemes account for cloud-edge collaboration or for the constraints of energy consumption and task dependency. This paper builds a collaborative computation offloading model for cloud and edge computing and formulates a multi-objective optimization problem. We then propose an optimal-transport-based reinforcement learning (RL) approach, constructed by fusing optimal transport with policy-based RL, to solve the offloading problem and make the optimal offloading decision that minimizes the overall cost of delay and energy consumption. Simulation results show that the proposed approach effectively reduces this cost and significantly outperforms existing optimization solutions.
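The abstract does not specify how the optimal-transport component is computed. A common choice for the transport subproblem in such hybrid methods is entropic regularization solved by Sinkhorn iterations; the sketch below is only an illustration of that standard building block, not the paper's actual algorithm. The marginals `a` and `b` could, for instance, represent a task-load distribution and a server-capacity distribution, with `C` a delay/energy cost matrix; all of these names are hypothetical.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    a, b : source and target marginal distributions (sum to 1)
    C    : cost matrix, C[i, j] = cost of moving mass from i to j
    reg  : entropic regularization strength (smaller = closer to exact OT)
    Returns the transport plan P and the transport cost <P, C>.
    """
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)         # scale columns to match marginal b
        u = a / (K @ v)           # scale rows to match marginal a
    P = u[:, None] * K * v[None, :]
    return P, float(np.sum(P * C))
```

The resulting plan `P` (or its cost) can then serve as a matching signal or reward shaping term inside a policy-gradient loop, which is the general pattern of OT-enhanced RL methods.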
