A dynamic gesture interface for virtual environments based on hidden Markov models

This paper introduces a dynamic gesture interface for virtual environments based on hidden Markov models (HMMs). HMMs are employed to represent the continuous dynamic gestures, and their parameters are learned from training data collected with the CyberGlove. To avoid the gesture spotting problem, the standard deviation of the angle variation of each finger joint is used to describe the dynamic characteristics of the gestures. A prototype in which three different dynamic gestures control the rotation directions of a 3D cube is implemented to test the effectiveness of the proposed method.
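The feature described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the two-joint toy data, and the motion threshold are all assumptions. It computes, for each finger joint, the standard deviation of the frame-to-frame angle changes in a window of glove samples, and flags the window as containing motion when any joint's variation exceeds a threshold, which is one plausible way such a statistic could sidestep explicit gesture spotting.

```python
from statistics import pstdev
from typing import List, Sequence

def joint_variation_std(frames: Sequence[Sequence[float]]) -> List[float]:
    """Per-joint standard deviation of frame-to-frame angle changes.

    `frames` is a window of joint-angle vectors (one vector per sample),
    a hypothetical stand-in for a CyberGlove data stream.
    """
    # Frame-to-frame angle differences for every joint.
    deltas = [
        [b - a for a, b in zip(prev, curr)]
        for prev, curr in zip(frames, frames[1:])
    ]
    n_joints = len(frames[0])
    # Population std of each joint's sequence of angle changes.
    return [pstdev([d[j] for d in deltas]) for j in range(n_joints)]

def is_gesturing(frames: Sequence[Sequence[float]], threshold: float = 0.5) -> bool:
    # Treat the window as a candidate gesture when any joint varies
    # noticeably; static postures then need no further evaluation.
    # The threshold value here is an assumption, not from the paper.
    return max(joint_variation_std(frames)) > threshold
```

In a full system, windows flagged by `is_gesturing` would be scored against one trained HMM per gesture class, with the highest-likelihood model selecting the cube's rotation direction.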
