Surface motion graphs for character animation from 3D video

Multiple-view reconstruction of human performance as 3D video has advanced to the stage of capturing detailed non-rigid dynamic surface shape and appearance of the body, clothing and hair during motion [Aguiar et al. 2008; Starck and Hilton 2007]. Full 3D video scene capture holds the potential to create truly realistic synthetic animated content by reproducing the dynamics of shape and appearance that marker-based motion capture currently misses. However, acquisition yields only an unstructured volumetric or mesh approximation of the surface shape at each frame, without temporal correspondence, which makes this data harder to reuse than conventional motion-capture data. In this paper we introduce a framework that automatically constructs motion graphs for 3D video sequences and synthesizes novel animations that best satisfy user-specified constraints on movement, location and timing.
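To make the idea concrete, the following minimal Python sketch (not the paper's implementation) shows one way a motion graph over 3D video frames can be built and traversed: frames whose shapes are sufficiently similar are linked by candidate transition edges, and synthesis becomes a search for a low-cost frame path. The per-frame shape descriptor, the similarity threshold and the reduction of movement/location/timing constraints to a start and goal frame are all placeholder assumptions made only for illustration.

import numpy as np
import networkx as nx

def shape_similarity(desc_a, desc_b):
    # Placeholder similarity: Euclidean distance between per-frame
    # shape descriptors (e.g. volumetric occupancy histograms).
    return float(np.linalg.norm(desc_a - desc_b))

def build_motion_graph(descriptors, threshold):
    # Nodes are 3D video frames; edges are allowed frame-to-frame moves.
    g = nx.DiGraph()
    n = len(descriptors)
    g.add_nodes_from(range(n))
    # Edges along the captured sequence cost nothing to follow.
    for i in range(n - 1):
        g.add_edge(i, i + 1, cost=0.0)
    # Candidate transition edges between non-adjacent but similar frames.
    for i in range(n):
        for j in range(n):
            if abs(i - j) > 1:
                d = shape_similarity(descriptors[i], descriptors[j])
                if d < threshold:
                    g.add_edge(i, j, cost=d)
    return g

def synthesise_path(graph, start, goal):
    # Minimal stand-in for constrained synthesis: pick the frame path with
    # the lowest accumulated transition cost between two user-chosen frames.
    return nx.shortest_path(graph, start, goal, weight="cost")

# Example: 200 frames with random 64-bin descriptors standing in for real data.
descriptors = [np.random.rand(64) for _ in range(200)]
graph = build_motion_graph(descriptors, threshold=1.5)
print(synthesise_path(graph, start=0, goal=150))

In the actual framework, the transition cost and the path optimisation would additionally account for the user-specified movement, location and timing constraints rather than a single start and goal frame.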

[1] Adrian Hilton, et al. A Study of Shape Similarity for Temporal Surface Sequences of People. Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM), 2007.

[2] Lucas Kovar, et al. Motion graphs. SIGGRAPH, 2002.

[3] Adrian Hilton, et al. Surface Capture for Performance-Based Animation. IEEE Computer Graphics and Applications, 2007.

[4] Hans-Peter Seidel, et al. Performance capture from sparse multi-view video. ACM Transactions on Graphics, 2008.

[5] Wojciech Matusik, et al. Articulated mesh animation from multi-view silhouettes. ACM Transactions on Graphics, 2008.