Action sequence reproduction based on automatic segmentation and Object-Action Complexes

Teaching robots object manipulation skills is a complex task that involves multimodal perception and knowledge of how to process the sensor data. In this paper, we present a concept for humanoid robots in household environments with a variety of related objects and actions. Following the paradigm of Programming by Demonstration (PbD), we provide a flexible approach that enables a robot to adaptively reproduce an action sequence demonstrated by a human. The captured human motion data, together with the involved objects, is segmented into semantically conclusive sub-actions by detecting relations between the objects and the human actor. Matching actions are chosen from a library of Object-Action Complexes (OACs) using the preconditions and effects of each sub-action. The resulting sequence of OACs is parameterized for execution on a humanoid robot, depending on the observed action sequence and on the state of the environment during execution. The feasibility of this approach is shown in an exemplary kitchen scenario, where the robot has to prepare dough.
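The matching step described above can be illustrated with a minimal sketch: each OAC carries symbolic preconditions and effects over object relations, and for each demonstrated sub-action the planner selects an applicable OAC whose effects best cover the observed relation changes. All names, relations, and the selection heuristic below are hypothetical simplifications for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OAC:
    """Simplified Object-Action Complex: an action schema with symbolic
    preconditions and effects expressed as relations between objects."""
    name: str
    preconditions: frozenset  # relations that must hold before execution
    effects: frozenset        # relations that hold after execution

def match_oac(library, state, observed_effects):
    """Pick the OAC whose preconditions hold in the current symbolic state
    and whose effects best overlap the relations observed in the demo."""
    applicable = [o for o in library if o.preconditions <= state]
    if not applicable:
        return None
    return max(applicable, key=lambda o: len(o.effects & observed_effects))

def reproduce(library, initial_state, segmented_demo):
    """Chain one OAC per demonstrated sub-action, updating the symbolic
    world state with each chosen OAC's effects (a simplification: effects
    are only added here, never retracted)."""
    plan, state = [], set(initial_state)
    for observed_effects in segmented_demo:
        oac = match_oac(library, state, observed_effects)
        if oac is None:
            break  # no applicable OAC; reproduction would stop here
        plan.append(oac.name)
        state |= oac.effects
    return plan

# Hypothetical kitchen example in the spirit of the dough scenario:
library = [
    OAC("grasp_whisk",
        frozenset({"hand_free", "whisk_on_table"}),
        frozenset({"whisk_in_hand"})),
    OAC("stir_dough",
        frozenset({"whisk_in_hand", "bowl_on_table"}),
        frozenset({"dough_mixed"})),
]
state = {"hand_free", "whisk_on_table", "bowl_on_table"}
demo = [frozenset({"whisk_in_hand"}), frozenset({"dough_mixed"})]
print(reproduce(library, state, demo))
```

In this sketch the world state is purely symbolic; in the actual system each selected OAC would additionally be parameterized with the observed trajectories and the environment state at execution time.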
