Towards functional labeling of utility vehicle point clouds for humanoid driving

We present preliminary work on analyzing 3-D point clouds of a small utility vehicle for the purposes of humanoid robot driving. The scope of this work is limited to a subset of ingress-related tasks, including stepping up into the vehicle and grasping the steering wheel. First, we describe how partial point clouds are acquired from different perspectives using sensors including a stereo camera and a tilting laser range-finder. To capture finer detail and a larger model than a single sensor view allows, a KinectFusion [9]-like algorithm integrates the stereo point clouds as the sensor head is moved around the vehicle. Second, we discuss how individual sensor views can be registered to the overall vehicle model to provide context, and we present methods to estimate several geometric parameters critical to motion planning: (1) the floor height and the boundaries defined by the seat and the dashboard, and (2) the steering wheel pose and dimensions. Results from the different sensors are compared, and the usefulness of the estimated quantities for motion planning is demonstrated.
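
Of the quantities above, the floor height is the most amenable to a compact geometric formulation: it can be recovered by fitting a dominant plane to the points inside the cab, in the spirit of the plane-segmentation approaches of [2] and [6]. The sketch below is a minimal, NumPy-only RANSAC plane fit; it is illustrative only (the paper's pipeline is built on PCL [3] and ROS [4]), and the function name, distance threshold, and iteration count are our assumptions rather than the authors' implementation.

    # Minimal sketch: estimate the floor plane of a vehicle point cloud
    # with RANSAC. Illustrative only; not the paper's actual PCL pipeline.
    import numpy as np

    def ransac_floor_plane(points, n_iters=500, inlier_thresh=0.02, rng=None):
        """Fit a dominant plane to an (N, 3) point array.

        Returns (normal, d, inlier_mask) for the plane n.x + d = 0
        with the most inliers within inlier_thresh meters.
        """
        rng = np.random.default_rng() if rng is None else rng
        best_inliers, best_model = None, None
        for _ in range(n_iters):
            # Sample 3 distinct points and form a candidate plane.
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-9:          # degenerate (collinear) sample
                continue
            normal /= norm
            d = -normal.dot(p0)
            # Count points within the distance threshold of the plane.
            dist = np.abs(points @ normal + d)
            inliers = dist < inlier_thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (normal, d)
        normal, d = best_model
        return normal, d, best_inliers

With the vehicle model oriented so that z points up, the floor height follows from the fitted plane as -d / n_z (assuming the recovered plane is roughly horizontal). The steering wheel, being roughly an annulus, would instead call for a circle or torus fit to points near its supporting plane, a step we do not sketch here.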

[1] Nico Blodow, et al. Model-based and learned semantic object labeling in 3D point cloud maps of kitchen environments, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems.

[2] Maren Bennewitz, et al. From 3D point clouds to climbing stairs: A comparison of plane segmentation approaches for humanoids, 2011 11th IEEE-RAS International Conference on Humanoid Robots.

[3] Radu Bogdan Rusu, et al. 3D is here: Point Cloud Library (PCL), 2011 IEEE International Conference on Robotics and Automation.

[4] Morgan Quigley, et al. ROS: an open-source Robot Operating System, ICRA 2009.

[5] Jun-Ho Oh, et al. A common interface for humanoid simulation and hardware, 2010 10th IEEE-RAS International Conference on Humanoid Robots.

[6] Masayuki Inaba, et al. Plane segment finder: algorithm, implementation and applications, Proceedings 2001 ICRA, IEEE International Conference on Robotics and Automation.

[7] Michael Beetz, et al. Laser-based perception for door and handle identification, 2009 International Conference on Advanced Robotics.

[8] Zoltan-Csaba Marton, et al. Hierarchical object geometric categorization and appearance classification for mobile manipulation, 2010 10th IEEE-RAS International Conference on Humanoid Robots.

[9] Andrew W. Fitzgibbon, et al. KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera, 2011, UIST.

[10] William Whittaker, et al. Autonomous driving in urban environments: Boss and the Urban Challenge, 2008, J. Field Robotics.

[11] Avideh Zakhor, et al. Planar 3D modeling of building interiors from point cloud data, 2012 19th IEEE International Conference on Image Processing.

[12] Satoshi Kagami, et al. Biped navigation in rough environments using on-board sensing, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems.

[13] Thorsten Joachims, et al. Labeling 3D scenes for Personal Assistant Robots, 2011, ArXiv.

[14] Satoshi Kagami, et al. Autonomous navigation of a humanoid robot over unknown rough terrain using a laser range sensor, 2012, Int. J. Robotics Res.

[15] Masahiro Fujita, et al. Stair climbing for humanoid robots using stereo vision, 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[16] John J. Leonard, et al. Robust Tracking for Real-Time Dense RGB-D Mapping with Kintinuous, 2012.

[17] Dieter Fox, et al. Detection-based object labeling in 3D scenes, 2012 IEEE International Conference on Robotics and Automation.

[18] Sebastian Thrun, et al. Stanley: The robot that won the DARPA Grand Challenge, 2006, J. Field Robotics.