Learning end-effector orientations for novel object grasping tasks

We present a new method for computing valid end-effector orientations for grasping tasks, based on a fast and accurate three-layer hierarchical supervised machine learning framework. The framework is trained with a human in the loop in a learn-by-demonstration procedure, in which the robot is shown a set of valid end-effector rotations. Learning is then achieved through a multi-class support vector machine, orthogonal distance regression, and nearest-neighbor searches. We report results obtained both offline and on a humanoid torso, and demonstrate that the algorithm generalizes well to objects outside the training data.
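
For illustration only, the sketch below chains together the three learning components named in the abstract (a multi-class SVM, orthogonal distance regression, and a nearest-neighbor search) using off-the-shelf scikit-learn and SciPy routines. The feature dimensions, labels, and the way the stages are connected are assumptions made for this example, not the authors' actual pipeline.

```python
# Hypothetical sketch of a three-stage pipeline: multi-class SVM ->
# orthogonal distance regression -> nearest-neighbor lookup. All data,
# shapes, and stage interfaces are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors
from scipy import odr

rng = np.random.default_rng(0)

# Stage 1: classify an object feature vector into one of a few
# demonstrated orientation classes with a multi-class (one-vs-rest) SVM.
X = rng.normal(size=(200, 6))          # object/shape features (hypothetical)
y = rng.integers(0, 3, size=200)       # orientation-class labels from demonstrations
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)

# Stage 2: within a class, fit an axis to demonstrated data with
# orthogonal distance regression (errors treated in all variables).
demo = rng.normal(size=(50, 2))
fit = odr.ODR(odr.Data(demo[:, 0], demo[:, 1]), odr.unilinear).run()
print("ODR slope/intercept:", fit.beta)

# Stage 3: retrieve the closest demonstrated end-effector rotation
# (e.g. a quaternion) for a query object.
rotations = rng.normal(size=(50, 4))   # demonstrated rotations (hypothetical)
nn = NearestNeighbors(n_neighbors=1).fit(rotations)
dist, idx = nn.kneighbors(rng.normal(size=(1, 4)))
print("predicted class:", clf.predict(rng.normal(size=(1, 6)))[0])
print("nearest demonstrated rotation index:", idx[0, 0])
```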
