Repurposing hand animation for interactive applications

In this paper we describe a method for automatically animating interactive characters from an existing corpus of key-framed hand animation. The method learns separate low-dimensional embeddings for subsets of the corpus corresponding to different semantic labels. Each embedding uses the Gaussian Process Latent Variable Model (GP-LVM) to map high-dimensional rig control parameters to a three-dimensional latent space. By moving a particle model within one of these latent spaces, the method generates novel animations corresponding to that space's semantic label. Bridges link poses in one latent space to similar poses in another. Animations corresponding to transitions between semantic labels are generated by creating animation paths that move through one latent space and traverse a bridge into another. We demonstrate the method by using it to interactively animate a character as it plays a simple game with the user. The character comes from a previously produced animated film, and we train on the same data that was used to animate the character in that film. This motion represents an enormous investment of skillful work; our method allows that work to be repurposed and reused for interactively animating the familiar character from the film.
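The pipeline the abstract describes — per-label latent spaces, a particle wandering through one of them, and bridges into another — can be sketched as a toy example. Everything below is hypothetical: the linear maps stand in for trained GP-LVM mean mappings, the latent points are random rather than learned embeddings of real animation frames, and the dimensions, bridge count, and particle dynamics are arbitrary illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 50  # rig-control-parameter dimensionality (illustrative)

# Stand-ins for the GP-LVM mean mappings of two semantic labels "a" and "b";
# a real system would use trained kernel regressors, not random linear maps.
W_a = rng.normal(size=(3, D))
W_b = rng.normal(size=(3, D))
decoders = {"a": lambda z: z @ W_a, "b": lambda z: z @ W_b}

# Sample latent points in each space (in the paper these would be the
# embedded animation frames) and decode them to rig-parameter poses.
Z_a = rng.normal(size=(200, 3))
Z_b = rng.normal(size=(200, 3))
P_a, P_b = Z_a @ W_a, Z_b @ W_b

# Build bridges: keep the 10 cross-space pairs whose decoded poses are
# most similar, linking a latent point in "a" to one in "b".
dists = np.linalg.norm(P_a[:, None, :] - P_b[None, :, :], axis=2)
ii, jj = np.unravel_index(np.argsort(dists, axis=None)[:10], dists.shape)
bridges = [(Z_a[i], Z_b[j]) for i, j in zip(ii, jj)]

# A damped particle wanders through latent space "a"; if it passes near a
# bridge endpoint it traverses the bridge into space "b".
z, v, space = Z_a[0].copy(), np.zeros(3), "a"
path = []
for _ in range(100):
    v = 0.9 * v + 0.1 * rng.normal(size=3)  # damped random-walk velocity
    z = z + v
    if space == "a":
        for z_a, z_b in bridges:
            if np.linalg.norm(z - z_a) < 0.5:  # close enough to a bridge
                z, space = z_b.copy(), "b"     # jump into the other space
                break
    path.append(decoders[space](z))  # emit a rig-parameter pose

print(len(path), path[0].shape)  # 100 poses, each a D-dimensional rig vector
```

Decoding each latent position along the path yields a stream of rig-parameter poses, which is the sense in which moving through a latent space generates a novel animation with that space's semantic label.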
