Deep Reinforcement Learning for Backscatter-Aided Data Offloading in Mobile Edge Computing

Wireless network optimization has become very challenging as problem size and complexity grow tremendously, owing to the close coupling among network entities with heterogeneous service and resource requirements. By continuously interacting with the environment, deep reinforcement learning (DRL) provides a mechanism for network entities to build knowledge and make autonomous decisions that improve network performance. In this article, we first review typical DRL approaches and recent enhancements. We then discuss applications of DRL in mobile edge computing (MEC), which allows user devices to offload computation workload to MEC servers. For low-power user devices such as wireless sensors, however, MEC can be costly because data offloading over active RF communications also consumes considerable power. To balance the energy consumed by local computation and by data offloading, we propose a novel hybrid offloading model that exploits the complementary operations of active RF communications and low-power backscatter communications. To maximize the energy efficiency of MEC offloading, the DRL framework is customized to learn the optimal transmission scheduling and workload allocation across the two communication technologies. Numerical results show that the hybrid offloading scheme improves energy efficiency by more than 20 percent compared to existing schemes.
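The article's full DRL framework for transmission scheduling and workload allocation is not reproduced here. As a minimal sketch of the underlying idea, the code below uses plain tabular Q-learning (a deliberate simplification of the deep networks discussed in the article) to let a device choose, in each time slot, among local computation, backscatter offloading, and active RF offloading. All names and numeric parameters (energy costs, data rates, arrival rate, backlog cap, the energy weight) are hypothetical placeholders introduced for illustration, not values or interfaces from the paper.

```python
import numpy as np

# --- Hypothetical, illustrative parameters (not taken from the paper) ---
ACTIONS = ["local", "backscatter", "active_rf"]
ENERGY = {"local": 5.0, "backscatter": 0.5, "active_rf": 8.0}   # mJ per time slot
RATE   = {"local": 2.0, "backscatter": 1.0, "active_rf": 6.0}   # kbit per time slot
ENERGY_WEIGHT = 0.5      # trade-off between served workload and energy spent
ARRIVAL_KBIT = 3.0       # mean workload arriving per slot


def step(backlog, channel, action, rng):
    """Serve part of the backlog, pay the energy cost, and draw the next state."""
    rate = RATE[action]
    if action != "local" and channel == 0:       # a poor channel degrades RF links
        rate *= 0.3
    served = min(rate, backlog)
    reward = served - ENERGY_WEIGHT * ENERGY[action]
    backlog = min(backlog - served + rng.poisson(ARRIVAL_KBIT), 12.0)
    channel = rng.integers(0, 2)                 # i.i.d. two-state channel model
    return reward, backlog, channel


def bucket(backlog):
    """Coarse backlog level used as part of the discrete state."""
    return 0 if backlog < 2 else 1 if backlog < 6 else 2


def train(episodes=3000, slots=60, alpha=0.1, gamma=0.9, eps=0.1, seed=1):
    """Tabular Q-learning over states (channel quality, backlog level)."""
    rng = np.random.default_rng(seed)
    q = np.zeros((2, 3, len(ACTIONS)))           # (channel, backlog level, action)
    for _ in range(episodes):
        backlog, channel = 0.0, rng.integers(0, 2)
        for _ in range(slots):
            s = (channel, bucket(backlog))
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(q[s]))
            r, backlog, channel = step(backlog, channel, ACTIONS[a], rng)
            s_next = (channel, bucket(backlog))
            q[s][a] += alpha * (r + gamma * np.max(q[s_next]) - q[s][a])
    return q


if __name__ == "__main__":
    q = train()
    for c, cname in enumerate(["poor channel", "good channel"]):
        for b, bname in enumerate(["low backlog", "medium backlog", "high backlog"]):
            print(f"{cname:12s} | {bname:14s} -> {ACTIONS[int(np.argmax(q[c, b]))]}")
```

With these placeholder numbers, the learned policy tends to reserve active RF offloading for slots with a good channel and a large workload backlog, and to fall back to low-power backscatter offloading otherwise, mirroring the complementary use of the two communication technologies described in the abstract.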
