A Novel Vehicle Tracking Method for Cross-Area Sensor Fusion with Reinforcement Learning Based GMM

Radars, LiDARs, and cameras have been widely adopted in autonomous driving because their environment-perception capabilities are complementary. A key challenge, however, is how to effectively improve cross-area tracking accuracy given the massive data produced by multiple sensors. This paper proposes a novel tracking solution composed of a reinforcement-learning-based Gaussian mixture model (GMM), submodel center realignment, and data-driven trajectory association. Specifically, an improved GMM-EM algorithm, whose cluster number is selected by Q-learning, is first developed to cluster the dense short-range radar data points. A kinetic-energy-aware approach is then presented to realign the Q-learning GMM cluster centers and mitigate position error. In addition to Q-learning GMM clustering, a weight-scheduled method is presented to associate the data from a long-range radar and cameras for cross-area object extraction and trajectory fusion. Eighteen experiments for training and one experiment for verification were conducted on a fully instrumented autonomous vehicle. Experimental results demonstrate that the proposed method achieves better tracking performance when objects cross detection areas.
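The abstract does not spell out how the Q-learning-based cluster number interacts with GMM-EM. Purely as an illustration, not the authors' method, the following minimal sketch lets a tabular Q-learning agent adjust the GMM component count K on synthetic 2-D "radar" points, using negative BIC as an assumed reward signal; the EM fitter, hyperparameters, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a dense short-range radar point cloud: three groups
# of reflections (the real data in the paper comes from a vehicle-mounted radar).
points = np.vstack([
    rng.normal([0.0, 0.0], 0.3, (60, 2)),
    rng.normal([5.0, 1.0], 0.3, (60, 2)),
    rng.normal([2.0, 4.0], 0.3, (60, 2)),
])

def gaussian_pdf(X, mean, cov):
    """Multivariate normal density evaluated at each row of X."""
    d = X.shape[1]
    diff = X - mean
    inv = np.linalg.inv(cov)
    expo = -0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
    return np.exp(expo) / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))

def fit_gmm_bic(X, k, iters=60):
    """Plain EM fit of a k-component GMM; returns the BIC score (lower = better)."""
    n, d = X.shape
    means = X[rng.choice(n, k, replace=False)]
    covs = np.array([np.eye(d) for _ in range(k)])
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        dens = np.stack([w * gaussian_pdf(X, m, c)
                         for w, m, c in zip(weights, means, covs)], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, covariances
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    log_lik = np.log(dens.sum(axis=1)).sum()
    n_params = k * (d + d * (d + 1) // 2) + (k - 1)  # free parameters
    return n_params * np.log(n) - 2 * log_lik

bic_cache = {}
def bic_for(k):
    if k not in bic_cache:
        bic_cache[k] = fit_gmm_bic(points, k)
    return bic_cache[k]

# Tabular Q-learning over the cluster count: state = current K,
# actions = decrease / keep / increase, reward = -BIC of the resulting fit.
K_MIN, K_MAX = 1, 6
actions = (-1, 0, 1)
q_table = np.zeros((K_MAX + 1, len(actions)))
alpha, gamma, eps = 0.5, 0.9, 0.2

k = K_MIN
for _ in range(200):
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(q_table[k]))
    k_next = int(np.clip(k + actions[a], K_MIN, K_MAX))
    reward = -bic_for(k_next)
    q_table[k, a] += alpha * (reward + gamma * q_table[k_next].max() - q_table[k, a])
    k = k_next

# Greedy rollout of the learned policy to read off the selected cluster number.
best_k = K_MIN
for _ in range(10):
    best_k = int(np.clip(best_k + actions[int(np.argmax(q_table[best_k]))], K_MIN, K_MAX))
```

In this toy setup the BIC-based reward steers the agent toward the true number of point groups; the paper's actual reward design and state space may differ.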
