Efficient L-shape fitting for vehicle detection using laser scanners

Detecting surrounding vehicles is an essential task in autonomous driving and has been drawing enormous attention recently. When using laser scanners, L-shape fitting is a key step in model-based vehicle detection and tracking. In this paper, we formulate L-shape fitting as an optimization problem and propose an efficient search-based method to find the optimal solution. Our method does not rely on the ordering of points in the laser scan sequence and therefore conveniently supports data fusion from multiple laser scanners; it is efficient and involves very few tuning parameters; and it is flexible enough to accommodate various fitting demands through different fitting criteria. On-road experiments with production-grade laser scanners demonstrate the effectiveness and robustness of our approach.
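To make the search-based idea in the abstract concrete, the following is a minimal sketch, assuming a uniform grid search over candidate rectangle headings in [0°, 90°) and a closeness-style scoring criterion; the function names and parameters (fit_l_shape, angle_step_deg, d0) are illustrative and not taken from the paper's implementation.

```python
import numpy as np

def fit_l_shape(points, angle_step_deg=1.0):
    """Fit a rectangle (L-shape) to the 2D laser points of one vehicle cluster.

    points: (N, 2) array of x, y scan points from a segmented cluster.
    Returns the four rectangle corners as a (4, 2) array.
    """
    best_score, best_theta = -np.inf, 0.0
    # Search headings in [0, 90 deg); the rectangle repeats beyond that range.
    for theta in np.deg2rad(np.arange(0.0, 90.0, angle_step_deg)):
        e1 = np.array([np.cos(theta), np.sin(theta)])    # first edge direction
        e2 = np.array([-np.sin(theta), np.cos(theta)])   # orthogonal direction
        score = _closeness_criterion(points @ e1, points @ e2)
        if score > best_score:
            best_score, best_theta = score, theta

    # Recover the rectangle from the extreme projections at the best heading.
    e1 = np.array([np.cos(best_theta), np.sin(best_theta)])
    e2 = np.array([-np.sin(best_theta), np.cos(best_theta)])
    c1, c2 = points @ e1, points @ e2
    # Four boundary lines: e1.x = min/max of c1, e2.x = min/max of c2;
    # corners are the intersections of consecutive boundary lines.
    a = np.array([e1, e2, e1, e2])
    b = np.array([c1.min(), c2.min(), c1.max(), c2.max()])
    corners = np.array([
        np.linalg.solve(np.vstack([a[i], a[(i + 1) % 4]]),
                        np.array([b[i], b[(i + 1) % 4]]))
        for i in range(4)
    ])
    return corners

def _closeness_criterion(c1, c2, d0=0.01):
    """Score how tightly points hug their nearest rectangle edge (larger is better)."""
    # Per-point distance to the nearest of the four boundaries, floored at d0
    # to avoid division by zero for points lying exactly on an edge.
    d1 = np.minimum(c1.max() - c1, c1 - c1.min())
    d2 = np.minimum(c2.max() - c2, c2 - c2.min())
    d = np.maximum(np.minimum(d1, d2), d0)
    return np.sum(1.0 / d)
```

Swapping _closeness_criterion for an area- or variance-based score changes the fitting behavior without touching the search loop, which is the kind of flexibility with different fitting criteria that the abstract refers to.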
