Motion templates for automatic classification and retrieval of motion capture data

This paper presents new methods for the automatic classification and retrieval of motion capture data that facilitate the identification of logically related motions scattered throughout a database. As the main ingredient, we introduce the concept of motion templates (MTs), by which the essence of an entire class of logically related motions can be captured in an explicit and semantically interpretable matrix representation. The key property of MTs is that the variable aspects of a motion class can be automatically masked out in the comparison with unknown motion data. This enables robust and efficient motion retrieval even in the presence of large spatio-temporal variations. Furthermore, we describe how to learn an MT for a specific motion class from a given set of training motions. In our extensive experiments, which are based on several hours of motion data, MTs proved to be a powerful concept for motion annotation and retrieval, yielding accurate results even for highly variable motion classes such as cartwheels, lying down, or throwing motions.
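To make the masking idea concrete, the following minimal Python sketch compares a motion template (a matrix with entries in [0, 1], where values near 0.5 mark variable aspects of the class) against the binary feature matrix of an unknown motion using dynamic time warping, ignoring masked template entries in the local cost. The threshold delta, the toy matrices, and the use of plain global DTW are illustrative assumptions, not the paper's exact formulation; matching against a long, unsegmented database would additionally require a subsequence variant of DTW.

```python
import numpy as np

def masked_frame_distance(template_col, feature_col, delta=0.1):
    """Distance between one template column and one feature column.

    Template entries strictly between delta and 1 - delta are treated as
    'variable' and masked out, so they do not contribute to the distance.
    (The masking threshold delta is an assumption of this sketch.)
    """
    mask = (template_col <= delta) | (template_col >= 1.0 - delta)
    if not np.any(mask):
        return 0.0
    return np.abs(template_col[mask] - feature_col[mask]).mean()

def dtw_distance(template, features, delta=0.1):
    """Global DTW distance between a motion template (f x K matrix with
    entries in [0, 1]) and a binary feature matrix (f x N) of an unknown
    motion, using the masked frame distance as local cost."""
    K, N = template.shape[1], features.shape[1]
    D = np.full((K + 1, N + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, K + 1):
        for j in range(1, N + 1):
            cost = masked_frame_distance(template[:, i - 1],
                                         features[:, j - 1], delta)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[K, N] / (K + N)

# Toy example: a 4-feature template for a hypothetical motion class.
# Entries near 0 or 1 are characteristic; 0.5 marks variable aspects.
template = np.array([[1.0, 1.0, 0.5, 0.0],
                     [0.0, 0.5, 0.5, 1.0],
                     [1.0, 1.0, 1.0, 1.0],
                     [0.0, 0.0, 0.5, 0.5]])
unknown = np.array([[1, 1, 1, 0, 0],
                    [0, 0, 1, 1, 1],
                    [1, 1, 1, 1, 1],
                    [0, 0, 0, 1, 1]], dtype=float)
print("masked DTW distance:", dtw_distance(template, unknown))
```

In a retrieval setting, a low masked DTW distance would flag the unknown motion as a candidate member of the template's class, regardless of how it behaves in the masked (variable) entries.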

[1]  Jessica K. Hodgins,et al.  Performance animation from low-dimensional control signals , 2005, SIGGRAPH 2005.

[2]  Christoph Bregler,et al.  Motion capture assisted animation: texturing and synthesis , 2002, ACM Trans. Graph..

[3]  Maja J. Mataric,et al.  Automated Derivation of Primitives for Movement Classification , 2000, Auton. Robots.

[4]  Daniel Cohen-Or,et al.  Action synopsis: pose selection and illustration , 2005, ACM Trans. Graph..

[5]  Lance Williams,et al.  Motion signal processing , 1995, SIGGRAPH.

[6]  Tido Röder,et al.  Efficient content-based retrieval of motion capture data , 2005, SIGGRAPH 2005.

[7]  Eugene Fiume,et al.  An efficient search algorithm for motion data using weighted PCA , 2005, SCA '05.

[8]  Victor B. Zordan,et al.  Dynamic response for motion capture animation , 2005, SIGGRAPH 2005.

[9]  Michael Gleicher,et al.  Automated extraction and parameterization of motions in large data sets , 2004, SIGGRAPH 2004.

[10]  Wei Wang,et al.  A system for analyzing and indexing human-motion databases , 2005, SIGMOD '05.

[11]  Ling Guan,et al.  Quantifying and recognizing human movement patterns from monocular video images-part II: applications to biometrics , 2004, IEEE Transactions on Circuits and Systems for Video Technology.

[12]  David A. Forsyth,et al.  Automatic Annotation of Everyday Movements , 2003, NIPS.

[13]  Lucas Kovar,et al.  Flexible automatic motion blending with registration curves , 2003, SCA '03.

[14]  Zoran Popovic,et al.  Motion warping , 1995, SIGGRAPH.

[15]  Jernej Barbic,et al.  Segmenting Motion Capture Data into Distinct Behaviors , 2004, Graphics Interface.

[16]  Martin A. Giese,et al.  Morphable Models for the Analysis and Synthesis of Complex Motion Patterns , 2000, International Journal of Computer Vision.

[17]  Yasuhiko Sakamoto,et al.  Motion map: image-based retrieval and segmentation of motion data , 2004, SCA '04.

[18]  Dimitrios Gunopulos,et al.  Indexing Large Human-Motion Databases , 2004, VLDB.

[19]  Chih-Yi Chiu,et al.  Content-based retrieval for human motion data , 2004, J. Vis. Commun. Image Represent..

[20]  Victor B. Zordan,et al.  Dynamic response for motion capture animation , 2005, SIGGRAPH '05.

[21]  D. Talkin Fundamentals of Speech Synthesis and Speech Recognition , 1996 .

[22]  David A. Forsyth,et al.  Motion synthesis from annotations , 2003, ACM Trans. Graph..

[23]  Aaron Hertzmann,et al.  Style machines , 2000, SIGGRAPH 2000.

[24]  Kari Pulli,et al.  Style translation for human motion , 2005, SIGGRAPH 2005.

[25]  Michael F. Cohen,et al.  Verbs and Adverbs: Multidimensional Motion Interpolation , 1998, IEEE Computer Graphics and Applications.

[26]  Ling Guan,et al.  Quantifying and recognizing human movement patterns from monocular video Images-part I: a new framework for modeling human motion , 2004, IEEE Transactions on Circuits and Systems for Video Technology.