Capture and synthesis of human motion in video sequences
We present a knowledge-based framework for capturing and representing human walkers in video. The system models the human body as an articulated object of twelve rigid body parts whose motions are almost periodic and subject to dynamic constraints. The resulting representation is compact, comprising the motion, shape, and texture of each body part. We apply the representation to regenerate the original sequence and to synthesize articulated 3D human actions.
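The per-part representation described in the abstract can be sketched as a simple data structure: twelve rigid parts, each carrying an almost-periodic motion model plus shape and texture. The part names, the Fourier-series parameterization of the gait motion, and all field names below are illustrative assumptions, not the paper's actual formulation.

```python
from dataclasses import dataclass, field
import math

# Twelve rigid body parts, as assumed here (the paper does not list them).
BODY_PARTS = [
    "torso", "head",
    "left_upper_arm", "left_forearm", "right_upper_arm", "right_forearm",
    "left_thigh", "left_calf", "right_thigh", "right_calf",
    "left_foot", "right_foot",
]

@dataclass
class PartMotion:
    """Almost-periodic joint-angle trajectory, stored as a truncated
    Fourier series over one gait cycle (an assumed parameterization)."""
    mean: float = 0.0
    amplitudes: list = field(default_factory=list)  # harmonic amplitudes
    phases: list = field(default_factory=list)      # harmonic phases

    def angle(self, t: float, period: float = 1.0) -> float:
        # Evaluate the joint angle at time t; periodic with the gait period.
        w = 2.0 * math.pi / period
        return self.mean + sum(
            a * math.cos((k + 1) * w * t + p)
            for k, (a, p) in enumerate(zip(self.amplitudes, self.phases))
        )

@dataclass
class BodyPart:
    name: str
    motion: PartMotion
    shape: tuple          # e.g. (length, radius) of a rigid segment
    texture: bytes = b""  # texture map extracted from the video frames

# Assemble the twelve-part body model with placeholder parameters.
body = {
    name: BodyPart(name, PartMotion(0.0, [0.4], [0.0]), (0.4, 0.05))
    for name in BODY_PARTS
}

print(len(body))                                       # → 12
print(round(body["left_thigh"].motion.angle(0.0), 3))  # → 0.4
```

The periodicity assumption is what makes the representation compact: a few harmonics per joint replace a dense per-frame pose, and the same parameters can drive both regeneration of the original sequence and synthesis of new 3D actions.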