Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection

We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in a successful grasp, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, and thus to learn hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. We describe two large-scale experiments conducted on two separate robotic platforms. In the first experiment, about 800,000 grasp attempts were collected over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and gripper wear and tear. In the second experiment, we used a different robotic platform and 8 robots to collect a dataset of over 900,000 grasp attempts. The second platform was used to test transfer between robots, that is, the degree to which data from a different set of robots can be used to aid learning. Our experimental results demonstrate that our approach achieves effective real-time control, successfully grasps novel objects, and corrects mistakes through continuous servoing. Our transfer experiment further illustrates that data from different robots can be combined to learn more reliable and effective grasping.
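To make the servoing mechanism concrete, the following Python sketch shows how a learned grasp-success network can drive closed-loop control: at each step, candidate task-space motions are scored by the network and optimized with a derivative-free method such as the cross-entropy method, and only the best command is executed before re-observing the scene. This is a minimal illustration, not the paper's implementation; the function `grasp_success_prob`, the 4-D motion parameterization, and all hyperparameters (sample counts, elite fraction, exploration scale) are assumptions for the sake of the example.

```python
import numpy as np

def grasp_success_prob(image, motion):
    """Stand-in for the learned CNN g(I, v): the probability that
    executing task-space motion `motion` from the state shown in
    `image` yields a successful grasp. (Illustrative stub only;
    a real implementation would run a forward pass of the CNN.)"""
    return float(np.exp(-np.linalg.norm(motion - 0.1)))

def cem_servo_step(image, dim=4, n_samples=64, n_elite=6, n_iters=3):
    """One servoing step: choose the task-space motion (e.g. a 3-D
    translation plus a gripper rotation) that the network scores
    highest, via a simple cross-entropy-method search."""
    mean = np.zeros(dim)
    std = np.ones(dim) * 0.05  # initial exploration scale (assumed)
    for _ in range(n_iters):
        # Sample candidate motions from the current Gaussian.
        samples = np.random.randn(n_samples, dim) * std + mean
        scores = np.array([grasp_success_prob(image, v) for v in samples])
        # Refit the Gaussian to the highest-scoring candidates.
        elite = samples[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # motion command sent to the robot for this step

# Closed-loop servoing: re-plan from a fresh camera image every step,
# which is what lets the controller correct its own mistakes.
image = np.zeros((472, 472, 3))  # placeholder monocular camera image
for step in range(10):
    command = cem_servo_step(image)
    # robot.execute(command); image = camera.capture()  # hardware I/O
```

Because the optimizer is rerun on every new image, the controller needs no camera calibration or explicit pose estimate; the network's predictions alone determine the next motion, which is the essence of the hand-eye coordination described above.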
