Markerless motion capture of complex full-body movement for character animation

Vision-based full-body tracking aims to reproduce the performance of current commercial marker-based motion capture in a system that runs with conventional cameras and without special apparel or other equipment. This improves usability in existing application domains and opens up new possibilities, since the methods can be applied to image sequences acquired from any source. We present results from a system that performs robust visual tracking with an articulated body model, using data from multiple cameras. Our approach to searching the high-dimensional model configuration space is an algorithm called annealed particle filtering, which finds the best fit to the image data by propagating a stochastic particle set through multiple layers. The algorithm searches the configuration space efficiently without the need for restrictive dynamical models, permitting tracking of agile, varied movement. The acquired data can readily be applied to the animation of CG characters. Movie files illustrating the results in this paper may be obtained from http://www.robots.ox.ac.uk/~ajd/HMC/
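To make the multi-layer propagation idea concrete, the sketch below outlines one frame of annealed particle filtering in Python. It is a minimal illustration under stated assumptions, not the authors' implementation: the names (annealed_particle_filter, weight_fn), the linear beta schedule, the Gaussian diffusion, and the weighted-mean pose estimate are all illustrative choices.

```python
import numpy as np

def annealed_particle_filter(weight_fn, init_particles, n_layers=10,
                             noise_scale=0.1, rng=None):
    """One frame of annealed particle filtering (illustrative sketch).

    weight_fn(x)    -- image-likelihood weight for a pose vector x (assumed given;
                       in a real tracker it would score the projected body model
                       against the multi-camera image data)
    init_particles  -- (N, D) array of pose hypotheses carried over from the
                       previous frame
    """
    rng = np.random.default_rng() if rng is None else rng
    # Annealing exponents rise toward 1.0, sharpening the weighting
    # function layer by layer (assumed linear schedule).
    beta_schedule = np.linspace(0.2, 1.0, n_layers)

    particles = np.asarray(init_particles, dtype=float)
    n, dim = particles.shape

    for beta in beta_schedule:
        # Evaluate the softened fitness of every pose hypothesis.
        weights = np.array([weight_fn(p) for p in particles]) ** beta
        weights /= weights.sum()

        # Resample in proportion to the annealed weights.
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx]

        # Diffuse the survivors; shrink the noise as later layers sharpen.
        particles = particles + rng.normal(
            scale=noise_scale * (1.0 - beta + 0.05), size=(n, dim))

    # Report the weighted mean pose of the final layer as the frame estimate.
    final_w = np.array([weight_fn(p) for p in particles])
    final_w /= final_w.sum()
    return particles, (particles * final_w[:, None]).sum(axis=0)
```

The early layers use broad diffusion and soft weighting so particles can escape local optima in the high-dimensional configuration space; later layers tighten both, concentrating the set around the best-fitting pose, which then seeds the next frame.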
