A multi-sensor fusion system for moving object detection and tracking in urban driving environments

A self-driving car deployed in real-world driving environments must be capable of reliably detecting and effectively tracking nearby moving objects. This paper presents our new moving object detection and tracking system, which extends and improves the system we fielded in the 2007 DARPA Urban Challenge. We revised our earlier motion and observation models for the active sensors (i.e., radars and LIDARs) and introduced a vision sensor. In the new system, the vision module detects pedestrians, bicyclists, and vehicles and generates corresponding vision targets. Our system uses this visual recognition information to improve the tracking-model selection, data association, and movement classification of our earlier system. Through tests on logged data from actual driving, we demonstrate the improvements and performance gains of the new tracking system.
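
The abstract gives no implementation detail, but the fusion idea it describes can be sketched: class labels from the vision module are associated with active-sensor (radar/LIDAR) tracks and used to pick a class-appropriate motion model. The Python below is a minimal, hypothetical illustration of that data flow; the type names, motion-model names, and the 2 m association gate are assumptions, not values from the paper.

```python
import math
from dataclasses import dataclass

# Hypothetical mapping from vision class to motion model; the paper states
# that visual recognition informs tracking-model selection, but not how.
MODEL_BY_CLASS = {
    "pedestrian": "point_random_walk",
    "bicyclist": "bicycle_model",
    "vehicle": "constant_turn_rate",
}

@dataclass
class VisionTarget:
    class_label: str   # "pedestrian", "bicyclist", or "vehicle"
    x: float           # detected position in the vehicle frame (m)
    y: float

@dataclass
class Track:
    x: float                                  # tracked position (m)
    y: float
    motion_model: str = "point_random_walk"   # default before fusion
    class_label: str = "unknown"

def fuse_vision_into_tracks(tracks, vision_targets, gate=2.0):
    """Associate each vision target with the nearest active-sensor track
    (nearest-neighbour gating); associated tracks inherit the class label
    and a class-specific motion model."""
    for vt in vision_targets:
        best, best_dist = None, gate
        for tr in tracks:
            d = math.hypot(tr.x - vt.x, tr.y - vt.y)
            if d < best_dist:
                best, best_dist = tr, d
        if best is not None:
            best.class_label = vt.class_label
            best.motion_model = MODEL_BY_CLASS[vt.class_label]
    return tracks

# Example: a LIDAR track near a vision-detected bicyclist switches to a
# bicycle motion model, which a real tracker (e.g. an IMM filter) would
# then use for prediction.
tracks = fuse_vision_into_tracks(
    [Track(x=10.2, y=-1.9)],
    [VisionTarget("bicyclist", x=10.0, y=-2.0)],
)
print(tracks[0].motion_model)  # -> "bicycle_model"
```

In the actual system the class information also feeds data association and movement classification; the sketch shows only the model-selection path.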
