Efficient Robotic Grasp Learning by Demonstration

In this paper, we propose a Learning from Demonstration approach for robotic grasping with compliant arms. The compliance built into the robot arm for safety often makes accurate grasping difficult. In our approach, we construct a recurrent neural network that, given an estimate of the target object position and random initial joint angles of the robot arm, produces the full joint trajectory for grasping the target object. To generate smooth, stable trajectories while reducing the number of human demonstrations required, we propose a data augmentation method that enlarges the training set and employ a trajectory planning technique based on cubic splines. The robot's two arms are trained separately, and a support vector machine decides which arm should be used to grasp the target object. The evaluation results show that our recurrent model not only predicts the final joint configurations accurately but also generates smooth, stable trajectories. Moreover, the model is robust to changes in the initial joint state: even when the initial joint configuration is perturbed by disturbances, the model still generates trajectories that reach the final joint configurations for grasping the object. Finally, we tested the proposed learning method on the Pepper robot, which successfully grasps randomly placed objects on a workbench. Compared with traditional methods, which must avoid singular configurations and require accurate localization, our method is robust and efficient and can be applied to cluttered environments.
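As a minimal illustration of the cubic-spline trajectory planning mentioned above, the sketch below interpolates each joint between an initial and a final configuration with a single cubic segment whose boundary velocities are zero, which is one standard way to obtain smooth start/stop motion. The function name, duration, and step count are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cubic_joint_trajectory(q0, qT, steps=50):
    """Per-joint cubic interpolation with zero start/end velocity.

    Uses q(s) = q0 + d*(3s^2 - 2s^3) with s in [0, 1] and d = qT - q0,
    which satisfies q(0)=q0, q(1)=qT, q'(0)=q'(1)=0.
    Returns an array of shape (steps, n_joints).
    """
    q0 = np.asarray(q0, dtype=float)
    qT = np.asarray(qT, dtype=float)
    d = qT - q0
    s = np.linspace(0.0, 1.0, steps)[:, None]   # normalized time, column vector
    return q0 + d * (3.0 * s**2 - 2.0 * s**3)   # broadcast over joints
```

In practice one segment per demonstrated waypoint pair (or a full spline through all waypoints) would be used; this single-segment form just shows why the resulting joint motion is smooth at both endpoints.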
