Towards Real-World Deployment of Reinforcement Learning for Traffic Signal Control

Sub-optimal control policies in intersection traffic signal controllers (TSCs) contribute to congestion and negatively affect human health and the environment. Reinforcement learning (RL) for traffic signal control is a promising approach to designing better control policies and has attracted considerable research interest in recent years. However, most work in this area has used simplified simulation environments of traffic scenarios to train RL-based TSCs. To deploy RL in real-world traffic systems, the gap between simplified simulation environments and real-world applications must be closed. Therefore, we propose LemgoRL, a benchmark tool for training RL agents as TSCs in a realistic simulation environment of Lemgo, a medium-sized town in Germany. In addition to the realistic simulation model, LemgoRL encompasses a traffic signal logic unit that ensures compliance with all regulatory and safety requirements. LemgoRL offers the same interface as the well-known OpenAI Gym toolkit to enable easy integration into existing research work. To demonstrate the functionality and applicability of LemgoRL, we train a state-of-the-art deep RL algorithm on a CPU cluster using a framework for distributed and parallel RL, and compare its performance with other methods. Our benchmark tool drives the development of RL algorithms towards real-world applications.
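Because LemgoRL exposes the standard OpenAI Gym interface, an agent interacts with it through the usual reset/step loop. The sketch below illustrates that loop with a placeholder environment; the LemgoRL-specific environment name and constructor are assumptions and may differ from the actual package.

```python
# Minimal sketch of the Gym-style interaction loop the abstract refers to.
# "CartPole-v1" is only a stand-in; a Gym-compatible TSC environment such as
# the one LemgoRL provides would plug in at env creation (exact name assumed).
import gym

env = gym.make("CartPole-v1")  # placeholder for a Gym-compatible traffic signal env

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # Random policy as a stand-in for a trained RL traffic signal controller.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    total_reward += reward

env.close()
print(f"episode return: {total_reward:.2f}")
```

Any RL library that consumes Gym environments (e.g., a distributed framework such as RLlib) can train against an environment that follows this interface, which is what makes the drop-in use described in the abstract possible.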
