Robot to Human Object Handover using Vision and Joint Torque Sensor Modalities

We present a robot-to-human object handover algorithm implemented on a 7-DOF arm equipped with a 3-finger mechanical hand. The system performs fully autonomous, robust object handover to a human receiver in real time. Our algorithm relies on two complementary sensor modalities for feedback: joint torque sensors on the arm and an eye-in-hand RGB-D camera. Our approach is entirely implicit, i.e., there is no explicit communication between the robot and the human receiver. Information from these sensor modalities is fed to two dedicated deep neural networks: the torque-sensor network detects the human receiver's "intention" (pull, hold, or bump), while the vision network detects whether the receiver's fingers have wrapped around the object. The networks' outputs are then fused, and on that basis the system decides whether or not to release the object. Despite substantial challenges in sensor-feedback synchronization and in object and human-hand detection, our system achieves robust robot-to-human handover with 98% accuracy in preliminary real-world experiments with human receivers.
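The fusion step described above can be sketched as a simple conjunctive decision rule. The following is an illustrative sketch only, not the authors' implementation: the class names, intention labels, thresholds, and the specific rule (release only on a confident "pull" combined with a confirmed grasp) are all assumptions for the sake of the example.

```python
# Hypothetical sketch of fusing the two networks' outputs into a
# release/no-release decision. All names and thresholds are assumed.
from dataclasses import dataclass

INTENTIONS = ("pull", "hold", "bump")  # labels from the torque-sensor network


@dataclass
class FusionInput:
    intention_probs: dict  # torque-network softmax over INTENTIONS
    grasp_prob: float      # vision-network P(fingers wrapped around object)


def should_release(x: FusionInput,
                   intention_thresh: float = 0.8,
                   grasp_thresh: float = 0.9) -> bool:
    """Release only when the torque network confidently detects a 'pull'
    AND the vision network confirms the receiver's grasp."""
    intention = max(x.intention_probs, key=x.intention_probs.get)
    confident_pull = (intention == "pull"
                      and x.intention_probs[intention] >= intention_thresh)
    grasp_confirmed = x.grasp_prob >= grasp_thresh
    return confident_pull and grasp_confirmed
```

Requiring both modalities to agree is one plausible way to realize the robustness claim: a detected pull without a visually confirmed grasp (or vice versa) keeps the gripper closed.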
