Segmentation of dense range information in complex urban scenes

In this paper, an algorithm is presented to segment 3D points in dense range maps generated by fusing a single optical camera with a multiple emitter/detector laser range finder. The camera image and laser range data are fused using a Markov Random Field to estimate a 3D point corresponding to each image pixel. The resulting textured, dense 3D point cloud is segmented based on evidence of boundaries between regions. Clusters are discriminated by Euclidean distance, pixel intensity, and estimated surface normal using a fast, deterministic, near-linear-time segmentation algorithm. The algorithm is demonstrated on data collected with the Cornell University DARPA Urban Challenge vehicle. Performance of the proposed dense segmentation routine is evaluated in a complex urban environment and compared to segmentation of the sparse point cloud. Results demonstrate that the dense segmentation algorithm avoids over-segmentation more effectively than incorporating color and surface normal data into the sparse point cloud.
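The following is a minimal sketch of how a near-linear-time, graph-based clustering of a textured point cloud could look, using a union-find structure and a combined edge weight over Euclidean distance, intensity difference, and surface-normal disagreement. The weighting scheme, parameter names, and the adaptive merge criterion are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch: graph-based segmentation of a textured dense point cloud.
# Edge weights and thresholds below are assumptions for illustration only.
import numpy as np


class UnionFind:
    """Disjoint-set structure used for greedy region merging."""

    def __init__(self, n):
        self.parent = np.arange(n)
        self.rank = np.zeros(n, dtype=int)

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return ra


def segment_textured_cloud(points, intensities, normals, edges,
                           k=0.5, w_dist=1.0, w_int=0.5, w_norm=0.5):
    """Greedily merge neighbouring points connected by low-weight edges.

    points      : (N, 3) array of 3D positions.
    intensities : (N,)  array of pixel intensities in [0, 1].
    normals     : (N, 3) array of unit surface normals.
    edges       : (M, 2) array of neighbour index pairs (e.g. 8-connected pixels).
    k and the w_* weights are illustrative tuning parameters.
    """
    i, j = edges[:, 0], edges[:, 1]
    # Combined dissimilarity: Euclidean distance, intensity difference,
    # and angle between estimated surface normals.
    d_xyz = np.linalg.norm(points[i] - points[j], axis=1)
    d_int = np.abs(intensities[i] - intensities[j])
    d_norm = 1.0 - np.abs(np.sum(normals[i] * normals[j], axis=1))
    weight = w_dist * d_xyz + w_int * d_int + w_norm * d_norm

    n = len(points)
    uf = UnionFind(n)
    internal = np.zeros(n)          # largest edge weight inside each component
    size = np.ones(n, dtype=int)    # number of points in each component

    # Process edges from most to least similar; with union-find this runs in
    # near-linear time after the initial sort.
    for e in np.argsort(weight):
        a, b = uf.find(i[e]), uf.find(j[e])
        if a == b:
            continue
        w = weight[e]
        # Merge only if the boundary edge is not stronger than the internal
        # variation of either component, plus a size-dependent tolerance.
        if w <= min(internal[a] + k / size[a], internal[b] + k / size[b]):
            r = uf.union(a, b)
            size[r] = size[a] + size[b]
            internal[r] = w
    return np.array([uf.find(p) for p in range(n)])
```

The size-dependent tolerance (k / size) is one common way to let small components merge more readily than large ones, which is one plausible mechanism for avoiding over-segmentation in regions where distance, intensity, and normal cues are individually noisy.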
