SMig-RL

Service migration is a widely used approach in cloud computing that minimizes access cost by moving a service closer to the majority of its users. Although effective to some extent, service migration in existing research still falls short in its evolutionary abilities, namely scalability, sensitivity, and adaptability, which are needed to react effectively to dynamically changing environments. This article proposes an evolutionary framework based on deep reinforcement learning for virtual service migration in large-scale mobile cloud centers. To enhance the spatio-temporal sensitivity of the algorithm, we design a scalable reward function for virtual service migration, redefine the input state, and add a Recurrent Neural Network (RNN) to the learning framework. To enhance the algorithm's adaptability, we further decompose the action space and exploit the network cost to adjust the number of virtual machines (VMs). The experimental results show that, compared with existing approaches, the migration strategy generated by our algorithm not only significantly reduces the total service cost while achieving load balancing, but also handles burst situations at low cost in dynamic environments.
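To make the described design concrete, the following is a minimal sketch (not the authors' implementation) of the two ingredients the abstract names: a deep RL value network with a recurrent layer for spatio-temporal sensitivity, and a reward that trades off access cost, migration cost, and load balance. All class and function names, the weighting terms, and the state/action dimensions below are illustrative assumptions.

```python
# Hypothetical sketch: recurrent Q-network plus an illustrative migration reward.
import torch
import torch.nn as nn


class RecurrentQNet(nn.Module):
    """DQN-style value network with an LSTM layer so the agent can exploit
    temporal patterns in the observed access/load sequence (assumed design)."""

    def __init__(self, state_dim: int, num_actions: int, hidden_dim: int = 128):
        super().__init__()
        self.encoder = nn.Linear(state_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, states, hidden=None):
        # states: (batch, seq_len, state_dim), a window of recent observations
        x = torch.relu(self.encoder(states))
        out, hidden = self.lstm(x, hidden)
        q_values = self.head(out[:, -1, :])  # Q-values from the last time step
        return q_values, hidden


def migration_reward(access_cost, migration_cost, host_loads,
                     w_access=1.0, w_migrate=0.5, w_balance=0.3):
    """Illustrative reward: lower access/migration cost and a more even load
    distribution across hosts yield a higher reward (weights are placeholders)."""
    loads = torch.tensor(host_loads, dtype=torch.float32)
    imbalance = loads.std() / (loads.mean() + 1e-8)  # coefficient of variation
    return -(w_access * access_cost
             + w_migrate * migration_cost
             + w_balance * imbalance.item())


# Example: score one candidate migration decision for a 6-host cluster.
net = RecurrentQNet(state_dim=32, num_actions=6)
obs_window = torch.randn(1, 10, 32)  # last 10 observations of the environment
q, _ = net(obs_window)
print("best target host:", q.argmax(dim=1).item())
print("reward:", migration_reward(2.4, 0.8, [0.6, 0.7, 0.5, 0.9, 0.4, 0.6]))
```

In this sketch the recurrent layer stands in for the RNN the abstract adds to the learning framework, and the weighted reward mirrors the idea of jointly penalizing service cost and load imbalance; the actual reward shaping and action-space decomposition in SMig-RL are defined in the paper itself.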
