A Reinforcement Learning Based Task Offloading Scheme for Vehicular Edge Computing Network

Recently, trends toward automation and intelligence in vehicular networks have led to the emergence of intelligent connected vehicles (ICVs), and intelligent applications such as autonomous driving have developed rapidly. These applications are typically compute-intensive and require large amounts of computation resources, which conflicts with the limited resources of vehicles. This contradiction has become a bottleneck in the development of vehicular networks. To address this challenge, researchers have combined mobile edge computing (MEC) with vehicular networks, proposing vehicular edge computing networks (VECNs). Deploying MEC servers near vehicles allows compute-intensive applications to be offloaded to the servers for execution, alleviating the vehicles' computational burden. However, existing task offloading schemes often fail to adequately account for the high dynamics of vehicular networks, which make traditional optimization methods such as convex/non-convex optimization less suitable. To this end, we propose a reinforcement learning based task offloading scheme, i.e., a deep Q-learning algorithm, to solve the delay minimization problem in VECNs. Extensive numerical results corroborate the superior performance of the proposed scheme in reducing the processing delay of vehicles' computation tasks.
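To illustrate the offloading decision the abstract describes, the sketch below trains a simplified tabular Q-learning agent (the paper itself uses a deep Q-network) that chooses between local execution and MEC offloading so as to minimize processing delay. All numbers here — CPU rates, uplink rate, cycles per bit, the offloading setup overhead, and the discrete task sizes — are illustrative assumptions, not values from the paper.

```python
import random

# Illustrative system parameters (assumptions, not taken from the paper).
LOCAL_CPS = 1e9        # vehicle CPU speed, cycles per second
MEC_CPS = 1e10         # MEC server CPU speed, cycles per second
TX_RATE = 1e7          # uplink transmission rate, bits per second
CYCLES_PER_BIT = 1000  # computation intensity of a task
SETUP_DELAY = 0.2      # fixed signaling overhead per offload, seconds

SIZES = [1e5, 1e6, 1e7]  # discrete task sizes in bits -> 3 states
ACTIONS = [0, 1]         # 0: execute locally, 1: offload to MEC

def delay(size_bits, action):
    """Processing delay (seconds) of one task under a given decision."""
    cycles = size_bits * CYCLES_PER_BIT
    if action == 0:                      # local execution
        return cycles / LOCAL_CPS
    # offloading: signaling + uplink transmission + remote execution
    return SETUP_DELAY + size_bits / TX_RATE + cycles / MEC_CPS

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning over (task size, decision) pairs."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in SIZES]      # Q-table: state x action
    for _ in range(episodes):
        s = rng.randrange(len(SIZES))    # a task of random size arrives
        if rng.random() < eps:
            a = rng.choice(ACTIONS)      # explore
        else:
            a = max(ACTIONS, key=lambda x: q[s][x])  # exploit
        r = -delay(SIZES[s], a)          # reward = negative delay
        q[s][a] += alpha * (r - q[s][a])  # one-step value update
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(len(SIZES))]
print(policy)  # learned decision per task size
```

With these assumed parameters, the fixed offloading overhead dominates for the smallest task, so the agent learns to execute it locally while offloading the larger ones; the deep Q-network in the paper replaces this toy table with a neural approximator over a richer, dynamic vehicular state.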
