A unified framework for grasping and shape acquisition via pretouch sensing

This paper presents a unified framework for automatic exploration with a pretouch sensor, reducing uncertainty about an object's shape before grasping it. The robot starts with only the incomplete shape data it acquires from a Kinect depth sensor; it has no prior model of the object. By combining the Kinect point cloud with prior probability distributions for occlusion and transparency, it infers the portions of the object that the Kinect could not observe. Operating on this inferred shape, an iterative grasp-replanning and exploration algorithm decides when further exploration is required and where in the scene to explore with the pretouch sensor. The information gathered by each exploration action is added directly to the robot's environment representation and is automatically taken into account in the next grasp-planning iteration.
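The replan-and-explore loop described above can be sketched in a few lines. The sketch below is illustrative only and is not the paper's actual algorithm or API: voxels hold occupancy probabilities (0.5 means fully uncertain), a grasp is the set of voxels it would contact, and the loop alternates between picking the least-uncertain candidate grasp and probing that grasp's most uncertain voxel with a (here simulated) pretouch measurement until the stopping criterion is met. The uncertainty measure, threshold, and function names are all assumptions.

```python
# Illustrative sketch of an iterative replan-and-explore loop
# (hypothetical names and thresholds; not the paper's implementation).

UNCERTAINTY_THRESHOLD = 0.15  # assumed stopping criterion


def uncertainty(p):
    """How far an occupancy probability is from certainty (0 or 1)."""
    return min(p, 1.0 - p)


def grasp_uncertainty(grasp, belief):
    """Mean uncertainty over the voxels a grasp would contact."""
    return sum(uncertainty(belief[v]) for v in grasp) / len(grasp)


def explore_and_grasp(belief, candidate_grasps, sense, max_probes=10):
    """Replan until the best grasp is confident enough, probing the
    most uncertain contacted voxel with the pretouch sensor otherwise."""
    grasp = min(candidate_grasps, key=lambda g: grasp_uncertainty(g, belief))
    for _ in range(max_probes):
        if grasp_uncertainty(grasp, belief) < UNCERTAINTY_THRESHOLD:
            break  # confident enough to execute this grasp
        # Probe the voxel whose measurement is most informative.
        target = max(grasp, key=lambda v: uncertainty(belief[v]))
        belief[target] = 1.0 if sense(target) else 0.0  # pretouch result
        # New information may change which grasp is best: replan.
        grasp = min(candidate_grasps, key=lambda g: grasp_uncertainty(g, belief))
    return grasp
```

A real system would update an occupancy map (e.g. an OctoMap) with a probabilistic sensor model rather than snapping voxels to 0 or 1, and would score grasps with a proper quality metric, but the control flow of interleaved planning and exploration is the same.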
