Multi-LeapMotion sensor based demonstration for robotic refine tabletop object manipulation task

Abstract In complicated tabletop object manipulation tasks, demonstration-based control is an efficient way to enhance the stability of execution for robotic systems. In this paper, we use a new optical hand-tracking sensor, the LeapMotion, to perform non-contact demonstration for robotic systems, and we develop a Multi-LeapMotion hand-tracking system. The placement of the two sensors is analyzed to obtain an optimal configuration that uses the information from both sensors efficiently. The coordinate systems of the Multi-LeapMotion hand-tracking device and the robotic demonstration system are also established. Based on the recognition of elemental actions and a delay calibration between the two sensors, fusion principles are developed that yield improved and corrected gesture recognition. Gesture-recognition and scenario experiments are carried out, and the results demonstrate the improvement achieved by the proposed Multi-LeapMotion hand-tracking system in tabletop object manipulation tasks for robotic demonstration.
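The core idea described in the abstract, registering two LeapMotion sensors into a common world frame and fusing their per-frame hand estimates, can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: the mounting transforms, the confidence-weighted fusion rule, and the palm-position readings are all assumptions made for the example.

```python
import numpy as np

def make_transform(rotation_deg, translation):
    """Homogeneous transform from a sensor's local frame to the shared
    world frame. For simplicity we assume the second sensor is tilted
    about the y-axis only (hypothetical mounting)."""
    t = np.radians(rotation_deg)
    R = np.array([[ np.cos(t), 0.0, np.sin(t)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = translation
    return T

def to_world(T, p_local):
    """Map a 3-D point from a sensor's local frame into the world frame."""
    return (T @ np.append(p_local, 1.0))[:3]

def fuse(p1, c1, p2, c2):
    """Confidence-weighted average of two palm-position estimates;
    falls back to one sensor when the other loses tracking."""
    if c1 + c2 == 0:
        return None  # both sensors lost the hand
    return (c1 * p1 + c2 * p2) / (c1 + c2)

# Assumed setup: sensor 1 faces up on the tabletop at the world origin,
# sensor 2 is tilted 45 degrees with a hypothetical mounting offset.
T1 = make_transform(0.0,  [0.0, 0.0, 0.0])
T2 = make_transform(45.0, [0.3, 0.0, 0.1])

# Hypothetical per-frame readings: local palm position (metres) and a
# tracking-confidence score in [0, 1] for each sensor.
palm1, conf1 = np.array([ 0.02, 0.18, -0.05]), 0.9
palm2, conf2 = np.array([-0.10, 0.21,  0.04]), 0.6

fused = fuse(to_world(T1, palm1), conf1, to_world(T2, palm2), conf2)
print("fused palm position (world frame):", fused)
```

The weighting step stands in for the paper's fusion principles: when one sensor's view of the hand is occluded (low confidence), the fused estimate leans on the other sensor, which is the motivation for the two-sensor setup. In a real pipeline the two streams would also need the temporal delay calibration mentioned in the abstract before fusion.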
