Deep Reinforcement Learning for Fog Computing-based Vehicular System with Multi-operator Support

This paper studies the potential performance improvement that can be achieved by enabling multi-operator wireless connectivity for cloud/fog computing-connected vehicular systems. The mobile network operator (MNO) selection and switching problem is formulated by jointly considering the switching cost, quality-of-service (QoS) variations across MNOs, and the different prices that may be charged by different MNOs as well as by cloud and fog servers. A double deep Q network (DQN)-based switching policy is proposed and proven to minimize the long-term average cost of each vehicle while guaranteeing latency and reliability performance. The proposed approach is evaluated using a dataset collected from a commercial city-wide LTE network. Simulation results show that the proposed policy significantly reduces the cost paid by each fog/cloud-connected vehicle while maintaining guaranteed latency service.
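To make the learning component concrete, the sketch below shows the update at the heart of a double-DQN switching policy: the online network selects the next action while a separate target network evaluates it, which mitigates the Q-value overestimation of vanilla DQN. This is a minimal PyTorch sketch under stated assumptions, not the paper's implementation; the state features, action set, network sizes, and cost signal are hypothetical placeholders standing in for the per-MNO latency, price, and switching-cost quantities described in the abstract.

```python
# Minimal double-DQN update sketch for MNO selection/switching.
# All dimensions and semantics below are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM = 4   # assumed features: per-MNO latency estimate, price, current MNO, backlog
N_ACTIONS = 3   # assumed actions: stay on current MNO, switch to MNO 2, switch to MNO 3
GAMMA = 0.95    # discount factor (proxy for the long-term average-cost objective)

def make_q_net() -> nn.Module:
    # Small MLP mapping a vehicle's state to one Q-value per switching action.
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online_net = make_q_net()
target_net = make_q_net()
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)

def double_dqn_step(s, a, cost, s_next, done):
    """One gradient step on a batch of transitions.

    s, s_next: (B, STATE_DIM) float tensors; a: (B,) long tensor;
    cost, done: (B,) float tensors. The reward is the negated cost
    (switching fee + service price + latency penalty), so minimizing
    cost corresponds to maximizing return.
    """
    reward = -cost
    with torch.no_grad():
        # Double DQN: the online net *selects* the next action,
        # the target net *evaluates* it (the decoupling that reduces
        # overestimation bias relative to vanilla DQN).
        next_a = online_net(s_next).argmax(dim=1, keepdim=True)
        next_q = target_net(s_next).gather(1, next_a).squeeze(1)
        y = reward + GAMMA * (1.0 - done) * next_q

    q = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Periodically sync the target net, e.g. every few hundred steps:
    # target_net.load_state_dict(online_net.state_dict())
    return loss.item()
```

In practice, transitions would be drawn from a replay buffer of observed (state, switching action, incurred cost, next state) tuples, and the target network would be refreshed on a slower timescale than the online network.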
