Learning Two-Person Interaction Models for Responsive Synthetic Humanoids

JVRB, 11(2014), no. 1. - Imitation learning is a promising approach for generating life-like behaviors of virtual humans and humanoid robots. So far, however, imitation learning has been mostly restricted to single-agent settings, where observed motions are adapted to new environmental conditions but not to the dynamic behavior of interaction partners. In this paper, we introduce a new imitation learning approach based on the simultaneous motion capture of two human interaction partners. From the observed interactions, low-dimensional motion models are extracted and a mapping between these motion models is learned. This interaction model allows the real-time generation of agent behaviors that are responsive to the body movements of an interaction partner. The interaction model can be applied both to the animation of virtual characters and to behavior generation for humanoid robots.
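The pipeline sketched in the abstract can be illustrated with a minimal toy implementation. This is only a sketch of the general idea, not the paper's method: it assumes PCA for the low-dimensional motion models and a ridge-regularized linear map for the interaction model, choices the abstract does not specify, and it uses synthetic data in place of real motion capture.

```python
import numpy as np

def pca(X, k):
    """Center the data and return its mean plus the top-k principal axes."""
    mu = X.mean(axis=0)
    # Rows of Vt are the principal directions of the centered data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

rng = np.random.default_rng(0)

# Toy stand-in for simultaneously captured joint-angle trajectories of
# two interaction partners (T frames, D joint angles each).
T, D, k = 500, 30, 5
t = np.linspace(0, 2 * np.pi, T)[:, None]
A = np.sin(t * rng.uniform(1, 3, D)) + 0.05 * rng.standard_normal((T, D))
B = np.cos(t * rng.uniform(1, 3, D)) + 0.05 * rng.standard_normal((T, D))

# Low-dimensional motion model for each partner.
muA, Wa = pca(A, k)
muB, Wb = pca(B, k)
Za = (A - muA) @ Wa.T          # latent trajectory of partner A
Zb = (B - muB) @ Wb.T          # latent trajectory of partner B

# Interaction model: ridge-regularized linear map from A's latent
# space to B's latent space, fitted on the paired recordings.
lam = 1e-3
M = np.linalg.solve(Za.T @ Za + lam * np.eye(k), Za.T @ Zb)

def respond(pose_a):
    """Generate an agent pose from a single observed partner pose."""
    z = Wa @ (pose_a - muA)    # project observation into A's motion model
    return muB + (z @ M) @ Wb  # map into B's latents, reconstruct the pose

agent_pose = respond(A[100])
```

Because `respond` is just two small matrix products per frame, such a mapping can run at motion-capture frame rates, which is what makes this style of interaction model suitable for the real-time responsiveness described above.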
