DRLD-SP: A Deep-Reinforcement-Learning-Based Dynamic Service Placement in Edge-Enabled Internet of Vehicles

The growth of 5G and edge computing has enabled the emergence of the Internet of Vehicles (IoV), which supports different types of services with diverse resource and service requirements. However, limited resources at the edge, high vehicle mobility, increasing demand, and dynamics in the types of services requested make service placement a challenging task. A typical static placement solution is ineffective because it does not account for traffic mobility and service dynamics. Handling these dynamics in IoV service placement is an important and challenging problem, and it is the primary focus of this paper. We propose a Deep-Reinforcement-Learning-based Dynamic Service Placement (DRLD-SP) framework that minimizes the maximum edge resource usage and service delay while accounting for vehicle mobility, varying demand, and dynamics in the requests for different types of services. We carry out simulation experiments using SUMO and MATLAB. The experimental results show that the proposed DRLD-SP approach is effective and outperforms static and other dynamic placement approaches.
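The abstract describes DRLD-SP only at a high level; the paper itself details the underlying decision process. Purely as an illustration of the idea, the sketch below uses tabular Q-learning (a simpler stand-in for the deep RL agent described above) on a toy environment in which an agent places each incoming service request on an edge node so as to keep the maximum node utilization and a delay proxy low. The node count, per-type resource loads, reward weights, and service-expiry model are all invented assumptions for this sketch, not the authors' setup.

import numpy as np

rng = np.random.default_rng(0)

N_NODES = 4             # edge nodes (assumed)
N_SERVICE_TYPES = 3     # service request types (assumed)
LOAD = [0.1, 0.2, 0.3]  # per-type resource demand (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def state_id(util, stype):
    # Discretize each node's utilization into 4 buckets -> tabular state.
    idx = 0
    for u in util:
        idx = idx * 4 + min(int(u * 4), 3)
    return idx * N_SERVICE_TYPES + int(stype)

Q = np.zeros((4 ** N_NODES * N_SERVICE_TYPES, N_NODES))

util = np.zeros(N_NODES)
stype = rng.integers(N_SERVICE_TYPES)
for step in range(20000):
    s = state_id(util, stype)
    # Epsilon-greedy choice of the edge node that hosts the new request.
    a = int(rng.integers(N_NODES)) if rng.random() < EPS else int(Q[s].argmax())
    util[a] = min(util[a] + LOAD[stype], 1.0)
    # Reward penalizes the maximum node utilization (load balancing) plus a
    # crude mobility-dependent delay proxy; the weights are assumptions.
    delay = rng.random() * (0.5 + util[a])
    reward = -(util.max() + 0.5 * delay)
    util *= 0.95  # placed services gradually complete/expire (assumed)
    next_stype = rng.integers(N_SERVICE_TYPES)
    s2 = state_id(util, next_stype)
    Q[s, a] += ALPHA * (reward + GAMMA * Q[s2].max() - Q[s, a])
    stype = next_stype

print("Learned placement preferences for a few states:", Q[:3])

In the paper's deep RL setting, the tabular Q would be replaced by a neural network over a continuous state (node loads, vehicle positions, request type), but the placement loop and max-usage/delay reward structure follow the same pattern.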
