Predict Robot Grasp Outcomes based on Multi-Modal Information

In service robot applications, a stable grasp requires carefully balancing the contact forces against the properties of the manipulated object, such as its shape and weight. Deducing whether a particular grasp will be stable from indirect measurements such as vision is therefore quite challenging, so direct sensing of contacts through tactile sensors offers an appealing avenue toward more successful and consistent robotic grasping. Beyond contact sensing, an object's intrinsic properties, such as shape and weight, also determine whether a grasp will be stable. In this work, we investigate whether tactile information and object intrinsic properties aid in predicting grasp outcomes within a multi-modal sensing framework that combines vision, touch, and object intrinsic properties. To that end, we collected more than 2,550 grasping trials using a 3-finger robot hand equipped with multiple tactile sensors. We trained multi-modal deep neural network models to predict grasp stability directly from each modality individually and from their combinations. Our experimental results indicate that combining visual information with tactile readings and the object's intrinsic properties significantly improves grasp outcome prediction.
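To make the multi-modal fusion idea concrete, below is a minimal sketch in PyTorch of one plausible late-fusion design. The abstract does not specify the network details, so all layer sizes, input dimensions, and names here are assumptions for illustration, not the authors' actual architecture.

```python
# Hypothetical sketch of a late-fusion grasp-outcome predictor combining
# vision, tactile readings, and object intrinsic properties. All dimensions
# and the fusion scheme are assumptions; the paper does not specify them.
import torch
import torch.nn as nn

class GraspOutcomePredictor(nn.Module):
    """Late fusion: vision + tactile + object intrinsic properties."""
    def __init__(self, tactile_dim=96, intrinsic_dim=4):
        super().__init__()
        # Visual branch: a small CNN over an RGB image of the grasp scene.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (B, 32)
        )
        # Tactile branch: an MLP over flattened taxel readings from the
        # sensors mounted on the 3-finger hand.
        self.tactile = nn.Sequential(
            nn.Linear(tactile_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Intrinsic-property branch: e.g. weight plus coarse shape descriptors.
        self.intrinsic = nn.Sequential(nn.Linear(intrinsic_dim, 16), nn.ReLU())
        # Fused head: a single logit for grasp success vs. failure.
        self.head = nn.Sequential(
            nn.Linear(32 + 32 + 16, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image, tactile, intrinsic):
        # Concatenate per-modality embeddings, then classify.
        z = torch.cat([self.vision(image),
                       self.tactile(tactile),
                       self.intrinsic(intrinsic)], dim=1)
        return self.head(z)  # raw logit; apply sigmoid for a probability

# Usage on dummy data with assumed shapes:
model = GraspOutcomePredictor()
logit = model(torch.randn(8, 3, 128, 128),   # RGB images
              torch.randn(8, 96),            # tactile readings
              torch.randn(8, 4))             # intrinsic properties
print(torch.sigmoid(logit).shape)            # (8, 1) success probabilities
```

Dropping one branch from the concatenation yields the single-modality baselines the abstract compares against, which is one simple way to test whether touch and intrinsic properties add predictive value beyond vision alone.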
