Three-Dimensional Real-Time Object Perception Based on a 16-Beam LiDAR for an Autonomous Driving Car

Object perception is essential for autonomous driving in urban environments. A 64-beam LiDAR is a widely used sensor in this field, but its high price has hindered the broader adoption of autonomous driving technology. An alternative is to adopt one or more 16-beam LiDARs. However, a 16-beam LiDAR produces relatively sparse data, which makes object perception more challenging. In this paper, a new perception method is proposed to tackle the problems caused by the sparse data of a 16-beam LiDAR. First, a segmentation method based on a 2D grid image is proposed, in which a free-space constraint is employed to reduce unreasonable image dilation and some segments are merged based on prior knowledge. Then, selective bounding-box features are employed in the association process to obtain more accurate results from the sparse data. The proposed method is evaluated on an autonomous driving car in real urban scenarios. The results show that the segmentation error can be as low as 7.7% with the free-space constraint and prior knowledge, and that the absolute tracking error and overall classification accuracy are 0.44 m/s and 93.33%, respectively.
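The grid-based segmentation step described above can be sketched as follows. This is a minimal illustration of the general idea only (project points onto a 2D occupancy grid, dilate to bridge gaps between sparse returns, then extract connected components); the cell size, grid extent, and the use of SciPy's labeling routine are assumptions for illustration, and the paper's free-space constraint and prior-knowledge merging are omitted.

```python
import numpy as np
from scipy import ndimage

def segment_points(points, cell=0.2, extent=40.0):
    """Cluster 3D points via connected components on a 2D occupancy grid.

    points: (N, 3) array of x, y, z coordinates in metres.
    Returns one integer label per point (0 = outside the grid extent).
    Note: cell size and extent are illustrative, not values from the paper.
    """
    n = int(2 * extent / cell)
    # Project x, y onto grid indices; drop points outside the extent.
    ij = np.floor((points[:, :2] + extent) / cell).astype(int)
    inside = ((ij >= 0) & (ij < n)).all(axis=1)
    grid = np.zeros((n, n), dtype=bool)
    grid[ij[inside, 0], ij[inside, 1]] = True
    # Dilation bridges gaps between sparse 16-beam returns; the paper
    # constrains this step with free space, which this sketch leaves out.
    grid = ndimage.binary_dilation(grid, iterations=1)
    labels, _ = ndimage.label(grid)
    out = np.zeros(len(points), dtype=int)
    out[inside] = labels[ij[inside, 0], ij[inside, 1]]
    return out

pts = np.array([[1.0, 1.0, 0.1], [1.1, 1.0, 0.3], [10.0, -5.0, 0.2]])
print(segment_points(pts))  # the first two nearby points share one label
```

Nearby points fall into the same (dilated) connected grid component and thus receive the same segment label, while distant points are separated.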
