Intelligent and resizable control plane for software defined vehicular network: a deep reinforcement learning approach
Software-Defined Networking (SDN) has become one of the most promising paradigms for managing large-scale networks. Distributing the SDN control plane has proven effective in terms of resiliency and scalability. However, choosing the number of controllers to use remains problematic. Too many controllers oversizes the deployment, increasing both the investment cost and the synchronization overhead in terms of delay and traffic load. Conversely, too few controllers may be insufficient to achieve the objectives of the distributed approach. The number of deployed controllers should therefore be tuned according to the traffic load and application requirements. In this paper, we present an Intelligent and Resizable Control Plane for Software Defined Vehicular Network architecture (IRCP-SDVN), in which SDN capabilities coupled with Deep Reinforcement Learning (DRL) achieve better QoS for vehicular applications. Interacting with the SDVN, the DRL agent decides the optimal number of distributed controllers to deploy according to the network environment (number of vehicles, load, speed, etc.). To the best of our knowledge, this is the first work that adjusts the number of controllers by learning from the dynamicity of the vehicular environment. Experimental results show that the proposed system outperforms a static distributed SDVN architecture in terms of end-to-end delay and packet loss.
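To illustrate the kind of agent the abstract describes, the sketch below shows a minimal DQN-style learner whose action is the number of controllers to deploy. The state features (vehicle count, traffic load, average speed), the reward weights, and the action bound are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a DRL agent that resizes the SDVN control plane.
# State features, reward shaping, and MAX_CONTROLLERS are assumptions for illustration.
import random
import torch
import torch.nn as nn

MAX_CONTROLLERS = 8   # assumed upper bound; action index i means deploying i+1 controllers
STATE_DIM = 3         # assumed observation: [num_vehicles, traffic_load, avg_speed]

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, MAX_CONTROLLERS))  # one Q-value per candidate controller count

    def forward(self, s):
        return self.net(s)

def reward(delay_ms, loss_rate, n_controllers):
    # Assumed reward: penalize end-to-end delay, packet loss, and controller cost.
    return -(1.0 * delay_ms + 50.0 * loss_rate + 0.5 * n_controllers)

q_net, q_target = QNet(), QNet()
q_target.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
eps, gamma = 0.1, 0.99

def select_action(state):
    # Epsilon-greedy choice of how many controllers to deploy.
    if random.random() < eps:
        return random.randrange(MAX_CONTROLLERS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def train_step(batch):
    # batch: list of (state, action, reward, next_state) transitions from the SDVN environment.
    s, a, r, s2 = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
    q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * q_target(s2).max(1).values
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch, the agent would periodically observe the vehicular network state, pick a controller count, reconfigure the control plane, and receive a reward derived from the measured delay and packet loss; the actual state space, reward function, and training procedure are defined in the paper itself.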