Poster Abstract: Deep Reinforcement Learning-based Resource Allocation in Vehicular Fog Computing

In vehicular fog computing (VFC), designing efficient resource allocation (RA) schemes that satisfy the latency requirements of emerging vehicular applications is challenging because network resources are limited and resource availability changes dynamically. In this paper, we formulate the VFC resource allocation (VFC-RA) problem and employ deep reinforcement learning (DRL) to predict the availability of VFC resources and service demands. We also propose a training method that decomposes the high-dimensional continuous action space into a three-dimensional grid, improving the training efficiency of the deep neural networks (DNNs).
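As an illustration of the action-space decomposition idea, the sketch below discretizes a continuous three-dimensional allocation action onto a grid so that a DQN-style agent can select a single flat action index. The choice of resource dimensions (CPU, bandwidth, memory shares) and the grid resolution K are assumptions for illustration only; the abstract does not specify these details.

```python
import numpy as np

# Hypothetical sketch: map a continuous 3-D resource-allocation action
# (CPU share, bandwidth share, memory share) onto a K x K x K grid so a
# value-based agent can choose among K**3 discrete actions instead of
# outputting a continuous vector. K = 5 is an illustrative assumption.
K = 5
levels = np.linspace(0.0, 1.0, K)  # allocation fractions per resource

def action_index_to_allocation(idx):
    """Decode a flat action index in [0, K**3) into (cpu, bw, mem) shares."""
    cpu_i, rem = divmod(idx, K * K)
    bw_i, mem_i = divmod(rem, K)
    return (levels[cpu_i], levels[bw_i], levels[mem_i])

def allocation_to_action_index(cpu, bw, mem):
    """Snap a continuous action to the nearest grid point; return flat index."""
    cpu_i = int(np.abs(levels - cpu).argmin())
    bw_i = int(np.abs(levels - bw).argmin())
    mem_i = int(np.abs(levels - mem).argmin())
    return cpu_i * K * K + bw_i * K + mem_i
```

With this encoding, the DNN only needs K**3 output units (here 125), which is what makes the grid decomposition attractive for training efficiency relative to a continuous action head.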