Encoding Distributional Soft Actor-Critic for Autonomous Driving in Multi-lane Scenarios

In this paper, we propose a new reinforcement learning (RL) algorithm, called encoding distributional soft actor-critic (E-DSAC), for decision-making in autonomous driving. Unlike existing RL-based decision-making methods, E-DSAC accommodates a variable number of surrounding vehicles and eliminates the need for manually predesigned sorting rules, leading to higher policy performance and generality. We first develop an encoding distributional policy iteration (DPI) framework by embedding a permutation-invariant module, which employs a feature neural network (NN) to encode the indicators of each vehicle, into the distributional RL framework. The proposed DPI framework is proved to possess important properties in terms of convergence and global optimality. Next, based on the encoding DPI framework, we propose the E-DSAC algorithm by adding the gradient-based update rule of the feature NN to the policy evaluation process of the DSAC algorithm. A multi-lane driving task and the corresponding reward function are then designed to verify the effectiveness of the proposed algorithm. Results show that the policy learned by E-DSAC realizes efficient, smooth, and relatively safe autonomous driving in the designed scenario, and the final policy performance of E-DSAC is about three times that of DSAC. Its effectiveness has also been verified in real vehicle experiments.
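The permutation-invariant encoding idea described above (a shared feature NN applied to each surrounding vehicle, with the per-vehicle outputs pooled into a fixed-dimensional vector regardless of vehicle count or order) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the two-layer MLP, tanh activations, sum pooling, and all dimensions are choices made here for brevity.

```python
import numpy as np

def feature_nn(x, W1, b1, W2, b2):
    # Shared two-layer MLP applied to one vehicle's indicator vector.
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def encode_state(ego, vehicles, params):
    # Encode each surrounding vehicle with the shared feature NN, then
    # sum-pool: the result is fixed-dimensional and unchanged under any
    # permutation of the vehicle list.
    feats = [feature_nn(v, *params) for v in vehicles]
    pooled = np.sum(feats, axis=0) if feats else np.zeros(params[3].shape[0])
    # Concatenate the ego-vehicle state with the pooled encoding.
    return np.concatenate([ego, pooled])
```

Because summation is commutative, shuffling the order of `vehicles` leaves `encode_state` unchanged, which is the property that removes the need for hand-designed sorting rules; a learned policy/value network can then consume this fixed-length vector directly.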
