Precomputing avatar behavior from human motion data

Creating controllable, responsive avatars is an important problem in computer games and virtual environments. Recently, large collections of motion capture data have been exploited for increased realism in avatar animation and control. Large motion sets have the advantage of accommodating a broad variety of natural human motion. However, when a motion set is large, the time required to identify an appropriate sequence of motions becomes the bottleneck for interactive avatar control. In this paper, we present a novel method for precomputing avatar behavior from unlabelled motion data in order to animate and control avatars at minimal runtime cost. Based on dynamic programming, our method finds a control policy that indicates how the avatar should act in any given situation. We demonstrate the effectiveness of our approach through examples that include avatars interacting with each other and with the user.
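The dynamic-programming precomputation described above can be illustrated with a minimal value-iteration sketch. This is not the paper's actual algorithm or API; it only shows the general idea of offline policy precomputation: states (e.g. motion-clip/situation pairs), the available actions per state, and the transition and reward functions are all illustrative assumptions. The expensive iteration runs offline, and at runtime the avatar simply looks up its precomputed best action.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-6):
    """Precompute a control policy mapping each state to its best action.

    actions(s) -> iterable of actions available in state s
    transition(s, a) -> deterministic next state
    reward(s, a) -> immediate reward for taking a in s
    (All four callables are hypothetical stand-ins for the paper's
    motion-data-derived state space.)
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best achievable value from state s.
            best = max(reward(s, a) + gamma * V[transition(s, a)]
                       for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Extract the policy table; runtime control is a single dictionary lookup.
    return {s: max(actions(s),
                   key=lambda a: reward(s, a) + gamma * V[transition(s, a)])
            for s in states}
```

For example, with a toy two-state world where "walk" moves the avatar from "far" to "near" a target (reward 1 for ending up "near"), the precomputed policy selects "walk" in the "far" state; the avatar never searches at runtime.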
