Deep Reinforcement Learning for Edge Caching and Content Delivery in Internet of Vehicles

To enable emerging vehicular applications and multimedia services in the Internet of Vehicles (IoV), edge caching is a promising paradigm that caches content in proximity to vehicles, thereby alleviating the heavy load on backhaul links and reducing transmission latency. However, in a multi-access vehicular network, complex content delivery and the high mobility of vehicles introduce new challenges for supporting edge caching in a dynamic environment. Deep Reinforcement Learning (DRL) is an emerging technique well suited to such time-varying problems. In this paper, we utilize DRL to design an optimal vehicular edge caching and content delivery strategy that minimizes content delivery latency. We first propose a multi-access edge caching and content delivery framework for vehicular networks. Then, we formulate the vehicular edge caching and content delivery problem and propose a novel DRL algorithm to solve it. Numerical results demonstrate the effectiveness of the proposed DRL-based algorithm compared to two benchmark solutions.
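The abstract does not specify which DRL algorithm the paper adopts, so the sketch below is purely illustrative: it assumes a simple DQN-style agent (in PyTorch) that learns which contents an edge node such as an RSU should cache, with the reward set to the negative delivery latency of incoming requests. The catalog size, cache capacity, hit/miss latencies, Zipf request popularity, and random eviction rule are all assumptions for illustration, not the paper's system model.

```python
# Illustrative sketch only (not the paper's exact algorithm): a DQN-style agent
# choosing which content an edge node should cache, rewarded by negative
# delivery latency. All parameter values below are assumptions.
import random
import numpy as np
import torch
import torch.nn as nn

N_CONTENTS = 20                          # content catalog size (assumed)
CACHE_SIZE = 5                           # edge cache capacity (assumed)
HIT_LATENCY, MISS_LATENCY = 1.0, 10.0    # edge vs. backhaul delay in ms (assumed)

class QNet(nn.Module):
    """Maps (current request one-hot + cache indicator) to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_CONTENTS, 64), nn.ReLU(),
            nn.Linear(64, N_CONTENTS + 1))  # actions: cache content i, or do nothing
    def forward(self, x):
        return self.net(x)

def state_vec(request, cache):
    s = np.zeros(2 * N_CONTENTS, dtype=np.float32)
    s[request] = 1.0
    for c in cache:
        s[N_CONTENTS + c] = 1.0
    return torch.from_numpy(s)

# Zipf-like request popularity (assumed traffic model)
ranks = np.arange(1, N_CONTENTS + 1)
popularity = (1.0 / ranks) / (1.0 / ranks).sum()

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
cache, eps, gamma = set(), 0.2, 0.95

request = int(np.random.choice(N_CONTENTS, p=popularity))
for step in range(2000):
    s = state_vec(request, cache)
    # epsilon-greedy choice of caching action
    if random.random() < eps:
        a = random.randrange(N_CONTENTS + 1)
    else:
        a = int(qnet(s).argmax())
    if a < N_CONTENTS and a not in cache:
        if len(cache) >= CACHE_SIZE:
            cache.remove(random.choice(list(cache)))  # random eviction (simplification)
        cache.add(a)
    # next request arrives; reward is the negative delivery latency
    next_request = int(np.random.choice(N_CONTENTS, p=popularity))
    r = -(HIT_LATENCY if next_request in cache else MISS_LATENCY)
    s_next = state_vec(next_request, cache)
    # one-step temporal-difference update of the Q-network
    with torch.no_grad():
        target = r + gamma * qnet(s_next).max()
    loss = (qnet(s)[a] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    request = next_request
```

In a vehicular setting the state would additionally capture vehicle positions, mobility, and multi-access link conditions, and the paper's formulation may use an actor-critic method for larger action spaces; this skeleton only conveys the learn-to-cache-from-latency idea.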
