Resource Allocation for Vehicular Fog Computing Using Reinforcement Learning Combined With Heuristic Information

The Internet of Vehicles (IoV) has emerged as a key component of smart cities. Connected vehicles increasingly process real-time data to respond immediately to user requests; however, sending this data to a remote cloud for processing introduces latency that such applications cannot tolerate. To resolve this issue, vehicular fog computing (VFC) has emerged as a promising paradigm that improves the quality of computation experiences for vehicles by offloading computation tasks from the cloud to network edges. Nevertheless, because fog resources are limited, only a small number of vehicles can be served at once, and it remains challenging to provide real-time responses for vehicular applications, such as traffic and accident warnings, in the highly dynamic IoV environment. In this article, we therefore formulate the problem of allocating limited fog resources, augmented by parked vehicles, to vehicular applications so that service latency is minimized. We then propose a heuristic algorithm that efficiently finds solutions to this formulation. In addition, we combine the proposed algorithm with reinforcement learning to make more efficient resource allocation decisions by leveraging the vehicles' movement and parking status collected from the city's smart environment. Our simulation results show that the proposed VFC resource allocation algorithm achieves higher service satisfaction than conventional resource allocation algorithms.
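The abstract does not detail the formulation, but the interplay it describes, a heuristic that prunes the allocation space and a reinforcement-learning agent that refines the final decision using mobility and parking context, can be illustrated with a small sketch. The Python snippet below is a hypothetical toy, not the paper's algorithm: FogNode, estimated_latency, heuristic_candidates, SoftmaxPolicy, and the latency and reward models are all illustrative assumptions. The heuristic shortlists fog nodes (roadside units and parked vehicles) by estimated service latency, and a REINFORCE-style softmax policy learns which shortlisted node to pick under a given context.

```python
# Hypothetical sketch of heuristic-filtered allocation with a simple
# policy-gradient learner. All names, numbers, and the latency model
# are illustrative assumptions, not taken from the paper.
import math
import random

class FogNode:
    def __init__(self, name, cpu_capacity, link_delay, is_parked_vehicle):
        self.name = name
        self.cpu = cpu_capacity          # available cycles per second
        self.link_delay = link_delay     # one-way transmission delay (s)
        self.parked = is_parked_vehicle  # parked vehicles augment fog capacity

def estimated_latency(task_cycles, task_bits, node, uplink_bps=6e6):
    """Heuristic estimate: transmission delay plus contention-free compute time."""
    return node.link_delay + task_bits / uplink_bps + task_cycles / node.cpu

def heuristic_candidates(task_cycles, task_bits, nodes, k=3):
    """Heuristic prune: keep only the k nodes with the lowest estimated latency."""
    return sorted(nodes, key=lambda n: estimated_latency(task_cycles, task_bits, n))[:k]

class SoftmaxPolicy:
    """Tabular softmax policy over the heuristic's shortlist (REINFORCE-style)."""
    def __init__(self, lr=0.1):
        self.theta = {}  # action preference per (context, node name)
        self.lr = lr

    def _prefs(self, ctx, names):
        return [self.theta.get((ctx, n), 0.0) for n in names]

    def choose(self, ctx, names):
        prefs = self._prefs(ctx, names)
        z = sum(math.exp(p) for p in prefs)
        probs = [math.exp(p) / z for p in prefs]
        return random.choices(range(len(names)), weights=probs)[0], probs

    def update(self, ctx, names, action, probs, reward):
        # Log-softmax gradient: 1{a=i} - pi(i); reward is negated latency.
        for i, n in enumerate(names):
            grad = (1.0 if i == action else 0.0) - probs[i]
            self.theta[(ctx, n)] = self.theta.get((ctx, n), 0.0) + self.lr * reward * grad

# Toy environment: one roadside unit plus two parked vehicles as fog nodes.
nodes = [FogNode("rsu-1", 8e9, 0.002, False),
         FogNode("pv-7", 4e9, 0.004, True),
         FogNode("pv-9", 2e9, 0.003, True)]
policy = SoftmaxPolicy()
for episode in range(500):
    task_cycles, task_bits = 2e8, 4e5  # one offloaded task
    ctx = "rush_hour"                  # stand-in for mobility/parking state
    shortlist = heuristic_candidates(task_cycles, task_bits, nodes)
    names = [n.name for n in shortlist]
    action, probs = policy.choose(ctx, names)
    # Observed latency includes noise the heuristic cannot see (e.g., contention).
    latency = estimated_latency(task_cycles, task_bits, shortlist[action]) \
              + random.uniform(0, 0.02)
    policy.update(ctx, names, action, probs, reward=-latency)
```

The design point the sketch captures is that the heuristic keeps the action space small and latency-aware, while the learner absorbs dynamics the heuristic cannot model; in the paper's setting, the learned component would additionally condition on the vehicles' movement and parking status rather than a fixed context label.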
