Self-learning drift control of automated vehicles beyond handling limit after rear-end collision

Vehicles involved in traffic accidents often undergo divergent motion that leads to severe damage. This paper presents a self-learning drift-control method for stabilizing a vehicle's yaw motion after a high-speed rear-end collision. The struck vehicle typically experiences substantial drifting and/or spinning after the impact, which lies beyond the handling limit and is difficult to control. Drift control that returns the struck vehicle to its original lane was investigated. The rear-end collision was modeled as a set of impact forces, and the three-dimensional non-linear dynamic response of the vehicle was accounted for in the drift controller. A multi-layer perceptron neural network was trained as a deterministic control policy within an actor-critic reinforcement-learning framework, with the policy updated iteratively from a randomly parameterized initialization. The results show that the self-learning controller was able to eliminate the unstable vehicle motion after roughly 60,000 data-driven training iterations. The controlled struck vehicle could also drift back to its original lane in a variety of rear-end collision scenarios, which could significantly reduce the risk of a secondary collision in traffic.
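The abstract describes a deterministic control policy parameterized by a multi-layer perceptron and trained with an actor-critic reinforcement-learning loop. The paper's implementation is not reproduced here; the sketch below is a minimal, hypothetical illustration of that training structure in PyTorch. All network sizes, state/action dimensions (vehicle states such as yaw rate and sideslip; steering and torque commands), and hyperparameters are assumptions for illustration, not the authors' values.

```python
# Minimal actor-critic sketch for a deterministic drift-control policy.
# Hypothetical illustration: dimensions, layer sizes, and hyperparameters
# are assumptions, not the paper's published design.
import torch
import torch.nn as nn

STATE_DIM = 8    # e.g. yaw rate, sideslip angle, lateral offset, ... (assumed)
ACTION_DIM = 2   # e.g. steering angle, drive/brake torque (assumed)

class Actor(nn.Module):
    """MLP mapping the vehicle state to a deterministic control action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q-network scoring state-action pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99  # discount factor (assumed)

def update(batch):
    """One actor-critic update from a replay batch (s, a, r, s_next, done)."""
    s, a, r, s_next, done = batch
    # Critic: regress Q(s, a) toward the bootstrapped one-step target.
    with torch.no_grad():
        target = r + gamma * (1 - done) * critic(s_next, actor(s_next))
    critic_loss = nn.functional.mse_loss(critic(s, a), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend the critic's value estimate of the deterministic policy.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

A full deterministic-policy-gradient implementation would also add exploration noise during simulated collision episodes and slowly updated target networks for stability; those details are omitted here to keep the sketch to the core actor and critic updates.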
