Imitation as a first step to social learning in synthetic characters: a graph-based approach

The processes and representations used to generate the behavior of expressive virtual characters are a valuable and largely untapped resource for helping those characters make sense of the world around them. In this paper, we present Max T. Mouse, an anthropomorphic animated mouse character who uses his own motor and behavior representations to interpret the behaviors he sees his friend Morris Mouse performing. Specifically, by using his own motor and action systems as models for the behavioral capabilities of others (a process known in the cognitive science literature as Simulation Theory), Max can begin to identify simple goals and motivations for Morris's behavior, an important step toward developing socially intelligent animated characters. Additionally, Max uses a novel motion graph-based movement recognition process to accurately parse and imitate Morris's movements and behaviors in real time, without prior examples, even when provided with limited synthetic visual input. Key contributions of this paper include demonstrating that using the same mechanisms for both perception and production of movement and behavior allows for an elegant conservation of representation, and that the inherent structure of motion graphs can be used to facilitate both movement parsing and movement recognition.
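
To make the motion-graph idea concrete, the following is a minimal sketch, not the system described in the paper, of graph-constrained movement recognition: nodes stand in for short, labeled motion segments, and an observed pose sequence is parsed by repeatedly stepping to the reachable node that best matches the next observation. The `Node` structure, the Euclidean `pose_distance` metric, the greedy `parse` routine, and the "walk"/"wave" labels are all illustrative assumptions, not the paper's representations.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    """One segment of a motion graph (hypothetical representation)."""
    label: str    # action label, e.g. "walk" or "wave"
    pose: tuple   # representative joint-angle vector for this segment
    edges: list = field(default_factory=list)  # nodes reachable next

def pose_distance(a, b):
    # Euclidean distance between two joint-angle vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def parse(observed, start):
    """Greedy graph-constrained parse: from `start`, stay put or follow
    the outgoing edge whose node best matches each observed pose.
    Returns the sequence of action labels, i.e. the recognized movements."""
    labels, current = [], start
    for pose in observed:
        candidates = [current] + current.edges
        current = min(candidates, key=lambda n: pose_distance(n.pose, pose))
        labels.append(current.label)
    return labels

# Tiny usage example with two hypothetical actions.
walk = Node("walk", (0.0, 1.0))
wave = Node("wave", (1.0, 0.2))
walk.edges.append(wave)
wave.edges.append(walk)
print(parse([(0.1, 0.9), (0.9, 0.3)], walk))  # -> ['walk', 'wave']
```

The key point the sketch illustrates is that the graph's edge structure constrains which interpretations are reachable at each step, so parsing and recognition fall out of the same traversal; a production system would likely replace the greedy step with a Viterbi-style search over whole paths.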
