A Reinforcement Learning-based Radio Resource Management Algorithm for D2D-based V2V Communication

Device-to-Device (D2D) communication is an emerging technology that offers many advantages for LTE-A networks, such as higher spectral efficiency and wireless peer-to-peer services. It is considered a promising technology for many fields, including public safety, network traffic offloading, and social applications and services. However, integrating D2D communications into cellular networks raises two main challenges. First, interference from D2D links to cellular links can significantly degrade the performance of cellular devices. Second, the minimum QoS requirements of D2D communications must be guaranteed. Synchronization between devices therefore becomes a necessity, and Radio Resource Management (RRM) remains a challenge. In this paper, we study the RRM problem for Vehicle-to-Vehicle (V2V) communication. A dynamic neural Q-learning-based resource allocation and resource sharing algorithm is proposed for D2D-based V2V communication in LTE-A cellular networks. Simulation results show that the proposed algorithm offers the best-performing allocations and improves network performance.
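The abstract does not detail the learning loop, but the update rule underlying any Q-learning-based allocator is standard (Watkins & Dayan, 1992). The sketch below is a minimal, illustrative tabular Q-learning loop for resource-block selection; the state discretization, reward function, and parameter values are assumptions for illustration only, not the paper's actual design (the paper proposes a neural variant).

```python
import random

def q_learning_resource_allocation(num_states, num_actions, reward_fn,
                                   episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Illustrative tabular Q-learning for resource allocation.

    Assumed model (not from the paper): each state is a coarse
    channel/interference level, each action is a candidate resource
    block for a V2V link, and reward_fn(state, action) returns
    (reward, next_state), e.g. a SINR-based reward.
    """
    Q = [[0.0] * num_actions for _ in range(num_states)]
    state = 0
    for _ in range(episodes):
        # epsilon-greedy action selection: explore with probability epsilon
        if random.random() < epsilon:
            action = random.randrange(num_actions)
        else:
            action = max(range(num_actions), key=lambda a: Q[state][a])
        reward, next_state = reward_fn(state, action)
        # Standard Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[next_state])
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
        state = next_state
    return Q
```

With a toy reward that favors one resource block, the learned greedy policy converges to that block, which is the behavior any such allocator relies on; in the paper's setting the table would be replaced by a neural approximator over the vehicular channel state.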
