Combining body sensors and visual sensors for motion training

We present a new framework for building motion training systems using machine learning techniques. Our approach aims to design a training method based on the combination of body and visual sensors. We introduce the concept of a Motion Chunk to analyze human motion and construct a motion data model in real time. The system provides motion detection, motion evaluation, and visual feedback generation. We discuss the results of user tests on the system's efficiency in martial arts training. With our system, trainers can generate motion training videos and practice complex motions that are precisely evaluated by a computer.
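As a rough illustration of the chunk-based idea, the sketch below segments a 1-D sensor stream into fixed-size "motion chunks" and scores each chunk against a reference motion. The chunk size, the 1-D signal, and the mean-absolute-difference metric are all assumptions for illustration; the paper's actual Motion Chunk model and evaluation criteria are not specified in this abstract.

```python
# Hypothetical sketch of chunk-based motion evaluation.
# Assumptions (not from the paper): fixed-size chunks, a 1-D sensor
# signal, and mean absolute difference as the per-chunk score.
from typing import List


def segment_into_chunks(samples: List[float], chunk_size: int) -> List[List[float]]:
    """Split a sensor stream into consecutive chunks of chunk_size samples."""
    return [samples[i:i + chunk_size]
            for i in range(0, len(samples) - chunk_size + 1, chunk_size)]


def chunk_distance(chunk: List[float], reference: List[float]) -> float:
    """Mean absolute difference between a trainee chunk and a reference chunk."""
    return sum(abs(a - b) for a, b in zip(chunk, reference)) / len(reference)


def evaluate_motion(trainee: List[float], reference: List[float],
                    chunk_size: int = 4) -> List[float]:
    """Score each trainee chunk against the corresponding reference chunk.

    Lower scores indicate closer agreement with the reference motion.
    """
    trainee_chunks = segment_into_chunks(trainee, chunk_size)
    reference_chunks = segment_into_chunks(reference, chunk_size)
    return [chunk_distance(t, r)
            for t, r in zip(trainee_chunks, reference_chunks)]
```

A real system would replace the fixed-size windows with motion-boundary detection and the scalar signal with multi-axis body-sensor and visual data, but the per-chunk compare-to-reference loop conveys the general structure.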
