Real-Time Gesture Recognition, Evaluation and Feed-Forward Correction of a Multimodal Tai-Chi Platform

This paper presents a multimodal system capable of understanding and correcting, in real time, the movements of Tai-Chi students through the integration of audio-visual-tactile technologies. The platform acts as a virtual teacher that transfers the knowledge of five Tai-Chi movements, using feedback stimuli to compensate for the errors a user commits while performing a gesture. The fundamental components of this multimodal interface are the gesture recognition system (based on k-means clustering, Probabilistic Neural Networks (PNN), and Finite State Machines (FSM)) and the real-time motion descriptor, which computes and qualifies the movements performed by the student with respect to those performed by the master, generating several feedback signals and compensating the movement in real time by varying the audio-visual-tactile parameters of different devices. Experiments with this multimodal platform confirm that the quality of the movements performed by the students improves significantly.
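The abstract names the recognition pipeline but gives no implementation details. As a purely illustrative sketch, the Python below shows one way such a pipeline could be wired together: k-means quantizes pose-feature frames into a small posture alphabet, a PNN (a Parzen-window, Gaussian-kernel classifier) labels each incoming frame, and an FSM advances through the expected posture sequence of a gesture. All names, feature dimensions, and parameters here are assumptions for illustration, not the authors' code.

```python
import numpy as np

# --- k-means: quantize raw pose features into a small posture alphabet ---
# (toy implementation; feature layout and k are illustrative assumptions)
def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# --- PNN: sum of Gaussian kernels per class; pick the densest class ---
class PNN:
    def __init__(self, sigma=0.3):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = X, y
        self.classes = np.unique(y)
        return self

    def predict(self, x):
        d2 = ((self.X - x) ** 2).sum(axis=1)
        k = np.exp(-d2 / (2 * self.sigma ** 2))
        scores = [k[self.y == c].sum() for c in self.classes]
        return self.classes[int(np.argmax(scores))]

# --- FSM: a gesture modeled as a fixed sequence of posture classes ---
class GestureFSM:
    def __init__(self, sequence):
        self.sequence = sequence  # expected posture classes, in order
        self.state = 0

    def step(self, posture):
        if self.state < len(self.sequence) and posture == self.sequence[self.state]:
            self.state += 1
        return self.state == len(self.sequence)  # True once the gesture completes

# Synthetic "pose features" standing in for the master's training frames.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, (30, 4)) for c in (0.0, 1.0, 2.0)])

# Build the posture alphabet, train the frame classifier, define the gesture.
centers, labels = kmeans(X, k=3)
pnn = PNN(sigma=0.3).fit(X, labels)
fsm = GestureFSM(sequence=[labels[5], labels[35], labels[65]])

done = False
for frame in (X[10], X[40], X[70]):  # simulated live frames from the student
    done = fsm.step(pnn.predict(frame))
print("gesture recognized:", done)
```

In a sketch of this shape, the FSM state at each frame also tells the system which target posture the student should currently be matching, which is the hook where the paper's real-time motion descriptor and audio-visual-tactile correction would attach.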
