Semi-Online Computational Offloading by Dueling Deep-Q Network for User Behavior Prediction

Task offloading can optimize computational resource utilization in edge computing environments. However, how to assign and offload tasks for users with different behaviors is an essential problem because of system dynamics, the diversity of intelligent applications, and user personalization. Leveraging user behavior prediction, this paper proposes soCoM, a semi-online Computational Offloading Model. We explore user behaviors in a sophisticated action space via reinforcement learning to capture unknown environment information. With a Dueling Deep-Q Network, both the prediction accuracy of user behaviors and server load balance are well considered, while computational efficiency increases and resource cost decreases. We build a dynamic simulation environment for edge computing to demonstrate that user behavior is the critical factor affecting system utilization. As the action space grows, the Dueling DQN outperforms the state-of-the-art DQN and other improved strategies, and it also maintains load balance in scenarios with multiple heterogeneous servers.
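The dueling architecture named in the abstract decomposes the Q-function into a state-value stream and an action-advantage stream, recombined as Q(s, a) = V(s) + A(s, a) − mean_a A(s, a). The sketch below is a minimal illustration of that architecture, not the authors' implementation: the state dimensions, layer sizes, and the mapping of actions to offloading targets (one per candidate server plus local execution) are all assumptions for this example.

```python
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Dueling network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor over the state (hypothetically: task size,
        # deadline, user-behavior features, server queue lengths).
        self.features = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # Value stream: scalar estimate of how good the current state is.
        self.value = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        # Advantage stream: one output per action (per offloading target).
        self.advantage = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, action_dim)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.features(state)
        v = self.value(h)       # shape: (batch, 1)
        a = self.advantage(h)   # shape: (batch, action_dim)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

# Hypothetical dimensions: a 10-feature state and 5 actions
# (4 candidate edge servers + local execution).
q_net = DuelingDQN(state_dim=10, action_dim=5)
q_values = q_net(torch.randn(1, 10))
action = q_values.argmax(dim=1).item()  # greedy offloading choice
```

In a DQN-style training loop this network would replace the single Q-head, with the usual machinery (experience replay, a target network, epsilon-greedy exploration) left unchanged; the advantage decomposition mainly helps when many actions have similar value, which matches the abstract's observation that the benefit grows with the action space.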
