Toward Reinforcement-Learning-Based Service Deployment of 5G Mobile Edge Computing with Request-Aware Scheduling

5G wireless network technology will not only significantly increase bandwidth but also introduce new capabilities such as massive machine-type communications (mMTC) and ultra-reliable low-latency communications (URLLC). However, high request latency remains a challenging problem even with 5G, because the massive number of requests generated by an ever-growing population of devices must travel long distances to reach services deployed in cloud data centers. By pushing services closer to the edge of the network, edge computing is recognized as a promising technology for reducing this latency. Properly deploying services among resource-constrained edge servers, however, remains an open problem. In this article, we propose a deep reinforcement learning approach that deploys services to edge servers while accounting for users' request patterns and the servers' resource constraints, two factors that have not been adequately explored. First, the system model and optimization objectives are formulated and analyzed. Then the problem is modeled as a Markov decision process and solved with the Dueling Deep Q-Network (Dueling DQN) algorithm. Experimental results on real-life mobile wireless datasets show that the proposed reinforcement learning approach adapts to request patterns and improves performance.
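To make the Dueling DQN component concrete, the sketch below shows the core of the dueling architecture: the Q-value for each service-placement action is decomposed into a state-value stream V(s) and an advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) - mean(A). This is a minimal NumPy illustration under assumed toy dimensions (the server count, state encoding, and weight shapes are hypothetical, not taken from the paper), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the state encodes per-server load and request
# counts; each action deploys a service replica onto one edge server.
NUM_SERVERS = 4
STATE_DIM = 2 * NUM_SERVERS
HIDDEN = 16

# Randomly initialised weights for a two-stream (dueling) network.
W_shared = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))
W_value = rng.normal(scale=0.1, size=(HIDDEN, 1))            # V(s) stream
W_adv = rng.normal(scale=0.1, size=(HIDDEN, NUM_SERVERS))    # A(s, a) stream

def dueling_q_values(state: np.ndarray) -> np.ndarray:
    """Combine value and advantage streams into per-action Q-values.

    Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a'); subtracting the mean
    advantage keeps the V and A streams identifiable.
    """
    h = np.tanh(state @ W_shared)      # shared feature layer
    v = h @ W_value                    # scalar state value V(s)
    a = h @ W_adv                      # per-action advantages A(s, a)
    return v + a - a.mean(keepdims=True)

# One decision step: observe the load/request state, then greedily pick
# the edge server on which to place the service.
state = rng.random(STATE_DIM)
q = dueling_q_values(state)
best_server = int(np.argmax(q))
print(q.shape, best_server)
```

In a full agent this forward pass would be wrapped in the usual DQN training loop (replay buffer, target network, epsilon-greedy exploration), with the reward reflecting the latency and resource objectives formulated in the paper.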
