Modeling spatial and temporal variation in motion data

We present a novel method to model and synthesize variation in motion data. Given a few examples of a particular type of motion as input, we learn a generative model that can synthesize a family of spatial and temporal variants that are statistically similar to the input examples. The new variants retain the features of the original examples but are not exact copies of them. We learn a Dynamic Bayesian Network model from the input examples, which allows us to capture conditional-independence properties in the data and to model it with a multivariate probability distribution. We present results for a variety of human motions and 2D handwritten characters. A user study shows that our new variants are less repetitive than the typical game and crowd-simulation approach of replaying a small number of existing motion clips. Our technique synthesizes new variants efficiently and has a small memory footprint.

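The abstract describes learning a Dynamic Bayesian Network from a few example motions and then sampling statistically similar variants from it. The sketch below is only an illustration of that general idea, assuming a simple linear-Gaussian transition model (a restricted special case of a DBN) and toy 2D trajectories in place of full-body motion data; the function names, model structure, and parameters are hypothetical and are not the paper's actual method.

```python
# Illustrative sketch only: fit a linear-Gaussian dynamic model
# x_{t+1} ~ N(A x_t + b, Sigma) to a few example trajectories and
# sample new variants from it. This is a crude stand-in for the
# paper's DBN learning, for illustration purposes.

import numpy as np

def fit_linear_gaussian_model(examples):
    """Fit A, b, Sigma from a list of (T_i, D) trajectory arrays."""
    X_prev = np.vstack([ex[:-1] for ex in examples])   # states at time t
    X_next = np.vstack([ex[1:] for ex in examples])    # states at time t+1
    # Append a constant column so the least-squares fit includes a bias term.
    X_aug = np.hstack([X_prev, np.ones((len(X_prev), 1))])
    W, *_ = np.linalg.lstsq(X_aug, X_next, rcond=None)
    A, b = W[:-1].T, W[-1]
    resid = X_next - X_aug @ W
    Sigma = np.cov(resid.T) + 1e-6 * np.eye(resid.shape[1])
    return A, b, Sigma

def sample_variant(A, b, Sigma, x0, length, seed=None):
    """Roll the learned model forward from x0, injecting Gaussian variation."""
    rng = np.random.default_rng(seed)
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(length - 1):
        mean = A @ xs[-1] + b
        xs.append(rng.multivariate_normal(mean, Sigma))
    return np.stack(xs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 60)
    # Three noisy 2D "example motions" (e.g. a pen stroke or a joint trajectory).
    examples = [np.c_[np.sin(t), np.cos(t)] + 0.02 * rng.standard_normal((60, 2))
                for _ in range(3)]
    A, b, Sigma = fit_linear_gaussian_model(examples)
    variant = sample_variant(A, b, Sigma, examples[0][0], 60, seed=1)
    print(variant.shape)  # (60, 2): a new trajectory similar to, but not a copy of, the inputs
```

Each call to `sample_variant` yields a different trajectory, which mirrors the abstract's point that variants can be generated cheaply at run time from a compact learned model rather than stored as explicit clips.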