Resource Management at the Network Edge: A Deep Reinforcement Learning Approach

With the advent of edge computing, it is desirable to extend some cloud services to the network edge so that they can be provisioned in the proximity of end users, with better performance and cost efficiency. Compared to cloud computing, edge computing is highly dynamic, so its resources should be managed adaptively. Traditional model-based resource management approaches are of limited practical use because they rely on assumptions or prerequisites that rarely hold at the edge. We argue that a model-free approach, which can adapt to network dynamics without any prior knowledge, is preferable. To this end, we introduce a model-free deep reinforcement learning (DRL) approach to efficiently manage resources at the network edge. Following the design principles of DRL, we design and implement a mobility-aware data processing service migration management agent. Experiments show that our agent automatically learns the user mobility pattern and accordingly controls service migration among the edge servers, minimizing the operational cost at runtime. We also present some potential future research challenges.
