Demand-Driven Deep Reinforcement Learning for Scalable Fog and Service Placement

The increasing number of Internet of Things (IoT) devices necessitates a more substantial fog computing infrastructure to support users' demand for services. In this context, the placement problem consists of selecting fog resources and mapping services to these resources. This problem is particularly challenging due to dynamic changes in both users' demand and available fog resources. Existing solutions rely on on-demand fog formation and periodic container placement using heuristics, owing to the NP-hardness of the problem. Unfortunately, constant service updates are time-consuming in terms of environment setup, especially when the required services and available fog nodes are changing. Therefore, motivated by the need for fast, proactive service updates to meet users' demand and by the complexity of the container placement problem, we propose in this paper a Deep Reinforcement Learning (DRL) solution, named Intelligent Fog and Service Placement (IFSP), that makes instantaneous placement decisions proactively. The DRL-based IFSP is built on a scalable Markov Decision Process (MDP) design. To address the long training time DRL needs to converge, and the large number of exploration errors it incurs along the way, we also propose a novel end-to-end architecture utilizing a service scheduler and a bootstrapper.
