Multi-Target Track-to-Track Fusion Based on Permutation Matrix Track Association

This paper proposes the Permutation Matrix Track Association (PMTA) algorithm to support multi-target, track-to-track, multi-sensor data fusion in an autonomous driving system. In this system, measurement data from each sensor modality (LIDAR, radar, and vision) is processed by an independent object tracker to produce a track list for the detected objects. The proposed approach fuses the track lists from the individual trackers, first by associating tracks across the lists and then by applying a state-estimation (filtering) step. The output is a unified set of object tracks supplied to downstream autonomous driving modules such as path and motion planning. PMTA considers both spatial and temporal information when associating object tracks from different sensor modalities. Experimental results show that the proposed approach improves not only the performance of multi-target track-to-track fusion but also the stability and robustness of the resulting speed control and decision making in the autonomous driving system.
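The abstract does not spell out how the permutation matrix is computed, so the following is only a minimal illustrative sketch. It assumes a softassign/Sinkhorn-style relaxation (in the spirit of soft point-matching methods): a cost matrix between two track lists is turned into a doubly stochastic matrix, a soft approximation of a permutation matrix, from which a hard association is read off. The track positions, the inverse-temperature parameter `beta`, and the use of a purely spatial cost (temporal terms omitted) are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def sinkhorn_association(cost, beta=5.0, n_iters=100):
    """Relax a cost matrix into an (approximately) doubly stochastic
    matrix via Sinkhorn iterations -- a soft permutation matrix."""
    # Convert costs to affinities; larger affinity = better match.
    M = np.exp(-beta * cost)
    # Alternate row and column normalization; the iterates converge
    # toward a doubly stochastic matrix (Sinkhorn's theorem).
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)
        M /= M.sum(axis=0, keepdims=True)
    return M

# Hypothetical example: two track lists with two tracks each
# (positions in metres, e.g. from a LIDAR tracker and a radar tracker).
tracks_a = np.array([[0.0, 0.0], [10.0, 0.0]])
tracks_b = np.array([[9.5, 0.2], [0.3, -0.1]])

# Spatial cost: pairwise Euclidean distance between track states.
cost = np.linalg.norm(tracks_a[:, None] - tracks_b[None, :], axis=-1)

P = sinkhorn_association(cost)
pairs = P.argmax(axis=1)  # hard assignment from the soft matrix
print(pairs)              # track a0 <-> b1, track a1 <-> b0
```

In a full system the cost would also carry the temporal consistency terms the abstract mentions, and the associated track pairs would then be fused in the filtering step.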
