Modeling spatial and temporal variation in motion data

We present a novel method to model and synthesize variation in motion data. Given a few examples of a particular type of motion as input, we learn a generative model that can synthesize a family of spatial and temporal variants that are statistically similar to the input examples. The new variants retain the features of the original examples but are not exact copies of them. From the input examples we learn a Dynamic Bayesian Network model that captures properties of conditional independence in the data and models the data with a multivariate probability distribution. We present results for a variety of human motions and for 2D handwritten characters. A user study shows that our new variants are less repetitive than the typical game and crowd-simulation approach of replaying a small number of existing motion clips. Our technique synthesizes new variants efficiently and has a small memory requirement.
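To make the generative idea concrete, the following is a minimal sketch, not the paper's actual model: it fits a first-order linear-Gaussian dynamic model (one simple instance of a Dynamic Bayesian Network) to example motion frames and then samples noisy rollouts as "variants". The function names, the fixed first-order structure, and the single Gaussian noise term are all assumptions made for illustration; the paper learns the DBN structure and conditional distributions from the data rather than fixing them in advance.

import numpy as np

def fit_linear_gaussian_dbn(clips):
    """Fit x_{t+1} ~ N(A x_t + b, Sigma) from a list of (T_i, D) pose arrays.

    This fixed first-order linear-Gaussian structure is an illustrative
    assumption, not the structure learned in the paper.
    """
    X = np.vstack([c[:-1] for c in clips])   # frames at time t
    Y = np.vstack([c[1:] for c in clips])    # frames at time t+1
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    W, *_ = np.linalg.lstsq(Xa, Y, rcond=None)   # least-squares transition
    A, b = W[:-1].T, W[-1]
    resid = Y - Xa @ W
    Sigma = np.cov(resid.T) + 1e-6 * np.eye(Y.shape[1])  # noise covariance
    return A, b, Sigma

def sample_variant(A, b, Sigma, x0, n_frames, rng=np.random.default_rng()):
    """Roll the learned dynamics forward, injecting Gaussian noise each step
    so every sampled sequence is a distinct variant of the learned motion."""
    frames = [np.asarray(x0, dtype=float)]
    for _ in range(n_frames - 1):
        mean = A @ frames[-1] + b
        frames.append(rng.multivariate_normal(mean, Sigma))
    return np.stack(frames)

Sampling repeatedly from such a model yields sequences that share the statistics of the training clips but differ from one another frame by frame, which is the behaviour the abstract describes, although the actual method also models temporal (timing) variation and richer conditional-independence structure.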
