Driving Decision and Control for Automated Lane Change Behavior based on Deep Reinforcement Learning

To achieve high-level automation, an automated vehicle must learn to make decisions and control its motion in complex scenarios. Due to the uncertainty and complexity of the driving environment, most classical rule-based methods cannot handle complicated decision tasks. Deep reinforcement learning has achieved impressive results in fields such as game playing and robotics. However, directly applying reinforcement learning algorithms to automated driving still faces challenges in handling complex driving tasks. In this paper, we propose a hierarchical reinforcement learning architecture for decision making and control in lane-change situations. We divide the decision and control process into two correlated sub-problems: 1) when to conduct the lane change maneuver and 2) how to conduct it. Specifically, we first apply a Deep Q-Network (DQN) to decide when to conduct the maneuver based on safety considerations. We then design a Deep Q-learning framework with a quadratic approximator to decide how to complete the maneuver in the longitudinal direction (e.g., adjusting to the selected gap or simply following the preceding vehicle). Finally, a polynomial lane-change trajectory is generated, and Pure Pursuit control is applied for path tracking during the lane change. We demonstrate the effectiveness of this framework in simulation, at both the decision-making and control layers.
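The control layer described above can be sketched in two parts. The abstract does not specify the polynomial order, so a quintic (minimum-jerk style) lateral profile is assumed here as a common choice for lane-change trajectories; the function names, lane width, and wheelbase values are illustrative assumptions, not the paper's implementation. The Pure Pursuit steering law follows the standard bicycle-model formulation.

```python
import math

def quintic_lane_change(lane_width, duration, t):
    """Lateral offset along an assumed quintic lane-change trajectory.

    Boundary conditions: zero lateral velocity and acceleration at both
    the start and the end of the maneuver.
    """
    s = min(max(t / duration, 0.0), 1.0)  # normalized time in [0, 1]
    return lane_width * (10 * s**3 - 15 * s**4 + 6 * s**5)

def pure_pursuit_steering(x, y, yaw, goal_x, goal_y, wheelbase):
    """Standard Pure Pursuit steering angle toward a lookahead point.

    alpha is the heading error to the lookahead point; ld is the
    lookahead distance (bicycle-model geometry).
    """
    alpha = math.atan2(goal_y - y, goal_x - x) - yaw
    ld = math.hypot(goal_x - x, goal_y - y)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)

# Example: sample the lateral profile for a 3.5 m lane change over 4 s,
# then compute a steering command toward a point on that trajectory.
lateral_at_midpoint = quintic_lane_change(3.5, 4.0, 2.0)   # half the offset
steer = pure_pursuit_steering(0.0, 0.0, 0.0, 10.0, lateral_at_midpoint, 2.7)
```

In a full pipeline, the DQN-based decision layer would select the gap and trigger this trajectory, and the steering command would be recomputed at each control step as the lookahead point advances along the generated path.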
