Reinforcement Learning Based Offloading for Realtime Applications in Mobile Edge Computing

Energy consumption is a critical concern for mobile devices such as smartphones and laptops. For devices that run multiple computation-intensive or delay-sensitive applications simultaneously, Mobile Edge Computing (MEC) based offloading offers a promising solution to the energy problem. However, blindly offloading every task to a MEC server is not always the best choice: transmitting a simple task over the wireless network can consume more energy than processing it locally. In addition, Dynamic Voltage and Frequency Scaling (DVFS) can reduce the energy consumed by locally processed tasks by appropriately lowering the CPU frequency. In this paper, we propose RRLO, a real-time reinforcement-learning-based offloading scheme that combines MEC-based offloading with DVFS-based energy reduction. RRLO jointly learns the offloading policy and the DVFS scheduling method: depending on the workload and network conditions, it determines not only whether a task should be offloaded to a MEC server but also which DVFS method should be used to schedule the local tasks. Simulation results show that RRLO outperforms existing MEC-based offloading schemes.
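
To make the joint offload-or-DVFS decision concrete, the following is a minimal tabular Q-learning sketch of the kind of agent the abstract describes, not the authors' implementation: the state captures workload and channel quality, the actions are "offload" versus two local DVFS settings, and the reward penalizes energy use and missed deadlines. The state encoding, the toy energy/delay model, and all hyperparameters below are illustrative assumptions, not the paper's actual system model.

```python
# A minimal Q-learning sketch of an offload-vs-DVFS agent.
# All state/action definitions, the energy model, and the
# hyperparameters are illustrative assumptions.
import random
from collections import defaultdict

ACTIONS = ["offload", "local_dvfs_low", "local_dvfs_high"]  # assumed action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1                       # assumed hyperparameters

q_table = defaultdict(float)  # maps (state, action) -> Q value


def observe_state():
    """Assumed state: discretized task-queue length and channel quality."""
    queue_len = random.randint(0, 4)  # stand-in for the local workload level
    channel = random.randint(0, 2)    # stand-in for wireless channel quality
    return (queue_len, channel)


def energy_and_delay(state, action):
    """Toy energy/delay model standing in for the paper's system model."""
    queue_len, channel = state
    if action == "offload":
        # Transmission energy grows as the channel degrades.
        return 2.0 + (2 - channel) * 1.5, 1.0
    if action == "local_dvfs_low":
        # A low CPU frequency saves energy but lengthens execution time.
        return 1.0 + 0.2 * queue_len, 3.0
    return 2.5 + 0.3 * queue_len, 1.5  # local_dvfs_high


def step(state, action):
    energy, delay = energy_and_delay(state, action)
    # Penalize slow execution when the queue is long (a proxy for deadlines).
    deadline_penalty = 5.0 if delay > 2.5 and state[0] >= 3 else 0.0
    reward = -(energy + deadline_penalty)
    return reward, observe_state()


def choose_action(state):
    if random.random() < EPSILON:  # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])


state = observe_state()
for _ in range(10_000):
    action = choose_action(state)
    reward, next_state = step(state, action)
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    # Standard one-step Q-learning update.
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )
    state = next_state
```

In a realistic setting the random stand-ins above would be replaced by measured queue and channel observations, and the hand-written energy/delay model by the device's actual power profile; the learning loop itself would be unchanged.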
