A vision-guided multi-robot cooperation framework for learning-by-demonstration and task reproduction

This paper presents a vision-based learning-by-demonstration approach for multi-robot manipulation. With this method, a vision system is involved in both the task demonstration and reproduction stages, and the speed and accuracy of the task reproduction are adapted according to the context of the demonstration. An expert first demonstrates how to use tools to perform a task, while the tool motion is observed using a vision system. The demonstrations are then encoded using a statistical model to generate a reference motion trajectory. Equipped with the same tools and the learned model, the robot is guided by vision to reproduce the task. Task performance is evaluated in terms of both accuracy and speed. Simply increasing the robot's speed, however, can degrade reproduction accuracy. To address this, a dual-rate Kalman filter is employed to compensate for the latency between the robot and the vision system. More importantly, the robot speed is adapted according to the learned motion model. We demonstrate the effectiveness of our approach on two tasks: a trajectory reproduction task and a bimanual sewing task. We show that, using our vision-based approach, the robots can perform effective learning by demonstration as well as accurate and fast task reproduction. The proposed approach generalises to other manipulation tasks in which bimanual or multi-robot cooperation is required.
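To illustrate the latency-compensation idea mentioned above, the following is a minimal sketch of a dual-rate Kalman filter, not the paper's implementation: it assumes a 1-D constant-velocity motion model, a hypothetical 200 Hz robot control loop, and 25 Hz visual position measurements, with prediction running at the fast robot rate and corrections applied only when a new camera frame arrives.

```python
# Minimal dual-rate Kalman filter sketch for vision latency compensation.
# Assumptions (not from the paper): 1-D constant-velocity model,
# 200 Hz robot control loop, 25 Hz vision measurements of position.
import numpy as np

class DualRateKalmanFilter:
    def __init__(self, dt, process_var=1e-3, meas_var=1e-4):
        self.dt = dt                            # fast (robot-rate) time step
        self.x = np.zeros(2)                    # state: [position, velocity]
        self.P = np.eye(2)                      # state covariance
        self.F = np.array([[1.0, dt],
                           [0.0, 1.0]])         # constant-velocity transition
        self.Q = process_var * np.array([[dt**4 / 4, dt**3 / 2],
                                         [dt**3 / 2, dt**2]])
        self.H = np.array([[1.0, 0.0]])         # vision measures position only
        self.R = np.array([[meas_var]])

    def predict(self):
        """Run at the fast robot rate, even when no image is available."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z):
        """Run only when a (slower, possibly delayed) vision measurement arrives."""
        y = z - self.H @ self.x                 # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

# Example: predict at 200 Hz, correct with vision every 8th step (25 Hz).
kf = DualRateKalmanFilter(dt=1.0 / 200.0)
true_pos = 0.0
for step in range(200):
    true_pos += 0.05 / 200.0                    # tool moving at 0.05 m/s
    estimate = kf.predict()
    if step % 8 == 0:                           # new camera frame available
        kf.update(np.array([true_pos + np.random.normal(0, 0.01)]))
```

The key design point is that the filter decouples the two rates: the robot controller always has a fresh state estimate from `predict()`, while the slower and delayed vision stream only supplies corrections through `update()`.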
