Using human gestures and generic skills to instruct a mobile robot arm in a feeder filling scenario

Mobile robots that can cooperate with humans open up new possibilities for the manufacturing industry. In this paper, we present our mobile robot arm, which can a) provide assistance at different locations in a factory and b) be instructed through complex human actions, such as the pointing gesture in "Take this object". We discuss the use of the mobile robot in a feeder filling scenario in which a human operator specifies the parts and the feeders through pointing gestures. The system is built in part from generic robotic skills. Through extensive experiments, we evaluate different aspects of the system.
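To make the instruction idea concrete, the sketch below shows one plausible way a pointing gesture could be resolved to a feeder and then handed to a generic-skill sequence. This is a minimal illustration under our own assumptions, not the paper's implementation: all names (Feeder, resolve_pointing_target, the skill labels) are hypothetical, and the target is chosen as the known workcell location closest to the estimated pointing ray.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical sketch (not from the paper): map an estimated pointing ray
# onto known feeder locations, then parameterise generic skills with the
# resolved target.

@dataclass
class Feeder:
    name: str
    position: np.ndarray  # feeder location in world coordinates (metres)

def resolve_pointing_target(origin, direction, feeders, max_offset=0.3):
    """Return the feeder closest to the pointing ray, or None.

    origin/direction define the estimated pointing ray (e.g. from the
    operator's hand through the fingertip); a feeder only matches if its
    perpendicular distance to the ray is below max_offset (metres).
    """
    direction = direction / np.linalg.norm(direction)
    best, best_dist = None, max_offset
    for feeder in feeders:
        v = feeder.position - origin
        t = float(np.dot(v, direction))
        if t <= 0:  # feeder lies behind the operator
            continue
        dist = float(np.linalg.norm(v - t * direction))  # distance to ray
        if dist < best_dist:
            best, best_dist = feeder, dist
    return best

if __name__ == "__main__":
    feeders = [Feeder("feeder_A", np.array([2.0, 0.5, 0.8])),
               Feeder("feeder_B", np.array([2.0, -1.0, 0.8]))]
    target = resolve_pointing_target(origin=np.array([0.0, 0.0, 1.5]),
                                     direction=np.array([1.0, 0.25, -0.35]),
                                     feeders=feeders)
    if target is not None:
        # A generic-skill sequence would then be instantiated with the
        # resolved target, e.g. ["drive_to", "pick_part", "fill_feeder"].
        print(f"Operator pointed at {target.name}")
```

The ray-to-point distance test stands in for whatever gesture estimator the system actually uses; the point is that gesture interpretation only has to produce a symbolic target, which then parameterises reusable skills.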
