Q-Learning Based Task Offloading and Resource Allocation Scheme for Internet of Vehicles

In this paper, we investigate the task offloading and resource allocation problem for the Internet of Vehicles (IoV). In the considered offloading scheme, a Bayesian classifier first classifies each task according to its latency and energy-consumption requirements. Based on the classification result, each vehicle user equipment (VUE) then selects the corresponding offloading mode. More specifically, if the task is more sensitive to energy consumption, it is executed at other vehicles through the vehicle-to-vehicle (V2V) offloading mode; otherwise, it is offloaded through the mobile edge computing (MEC) offloading mode. To achieve a trade-off between latency and energy consumption during task execution, we formulate the joint offloading and resource allocation problem as a mixed-integer nonlinear program. Since such problems are hard to solve exactly, a Q-learning based method is proposed to obtain an approximate solution. Simulation results demonstrate that, compared with existing schemes, the proposed scheme achieves higher system throughput, lower latency, and lower energy consumption.
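The decision pipeline described above (classify a task, then let a Q-learning agent pick an offloading mode that trades off latency against energy) can be illustrated with a minimal tabular sketch. Everything here is an assumption for demonstration: the two task classes, the toy latency/energy costs, and the reward weights are hypothetical stand-ins, not the paper's actual system model, which involves channel states and resource allocation as well.

```python
import random

ACTIONS = ["v2v", "mec"]            # offloading modes available to a VUE
ALPHA, EPSILON = 0.1, 0.1           # learning rate, exploration probability

def reward(task_class, action):
    """Toy reward trading off latency and energy (hypothetical values).
    Energy-sensitive tasks are assumed cheaper over V2V; latency-sensitive
    tasks are assumed better served by the MEC server."""
    latency = {"v2v": 0.8, "mec": 0.3}[action]
    energy = {"v2v": 0.2, "mec": 0.7}[action]
    if task_class == "energy":
        return -(0.2 * latency + 0.8 * energy)   # weight energy heavily
    return -(0.8 * latency + 0.2 * energy)       # weight latency heavily

def train(episodes=5000, seed=0):
    """Tabular Q-learning where each classified task is a one-step episode,
    so the update target is just the immediate reward."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in ("energy", "latency") for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(["energy", "latency"])    # a classified task arrives
        if rng.random() < EPSILON:               # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        q[(s, a)] += ALPHA * (reward(s, a) - q[(s, a)])
    return q
```

Under these toy costs, the learned policy routes energy-sensitive tasks over V2V and latency-sensitive tasks to the MEC server, mirroring the mode-selection rule stated in the abstract.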
