A nonlinear manifold learning framework for real-time motion estimation using low-cost sensors

We propose a real-time motion synthesis framework for controlling the animation of a 3D avatar. Instead of relying on a motion capture device as the control signal, we use low-cost, ubiquitously available 3D accelerometer sensors. The framework is data-driven and consists of two steps: model learning from an existing high-quality motion database, and motion synthesis from the control signal. In the model learning step, we apply a nonlinear manifold learning method to learn a high-dimensional motion model from a large motion capture database. In the motion synthesis step, we take the 3D accelerometer signals as input and synthesize high-quality motion from the learned model. The system runs in real time, making it suitable for a wide range of interactive applications, such as character control in 3D virtual environments and occupational training.
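As a rough illustration of the two-step pipeline, and not the authors' actual implementation, the sketch below uses locally linear embedding to learn a manifold model of example poses and radial-basis-function regression to map accelerometer readings onto that model. All array shapes, the synthetic data, and the choice of scikit-learn components (LocallyLinearEmbedding, KernelRidge) are assumptions made only for illustration.

```python
# Minimal sketch of a data-driven "learn, then synthesize" pipeline.
# Assumes scikit-learn's LLE and an RBF-kernel regressor stand in for the
# paper's manifold learning and synthesis steps.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.kernel_ridge import KernelRidge

# --- Step 1: model learning from a motion capture database (synthetic here) ---
rng = np.random.default_rng(0)
n_frames, n_dofs = 2000, 60                              # assumed: 2000 frames, 60 joint-angle DOFs
mocap_poses = rng.standard_normal((n_frames, n_dofs))    # placeholder for real mocap data

# Embed the high-dimensional poses on a low-dimensional nonlinear manifold.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=3)
latent = lle.fit_transform(mocap_poses)                  # (n_frames, 3) manifold coordinates

# Learn a mapping from the manifold back to full-body poses (RBF regression).
decoder = KernelRidge(kernel="rbf", alpha=1e-3)
decoder.fit(latent, mocap_poses)

# --- Step 2: motion synthesis from accelerometer control signals ---
# Assume each control frame is a vector of 3D accelerometer readings
# (e.g. 4 sensors x 3 axes = 12 values) recorded in sync with the mocap data.
accel_train = rng.standard_normal((n_frames, 12))        # placeholder sensor data
controller = KernelRidge(kernel="rbf", alpha=1e-3)
controller.fit(accel_train, latent)                      # sensors -> manifold coordinates

accel_live = rng.standard_normal((1, 12))                # one incoming sensor frame
pose = decoder.predict(controller.predict(accel_live))   # synthesized full-body pose
print(pose.shape)                                        # (1, 60)
```

At runtime only the two regression predictions are evaluated per frame, which is what makes a scheme of this kind feasible for real-time, interactive control.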
