A continuous Chinese sign language recognition system

We describe a system that recognizes both isolated and continuous Chinese Sign Language (CSL) using two CyberGloves and two 3SPACE position trackers as gesture input devices. To obtain robust gesture features, each joint angle collected by the CyberGloves is normalized, and the position and orientation of the left hand relative to the right hand are proposed as signer-position-independent features. To speed up recognition, fast-match and frame-prediction techniques are proposed. To tackle the movement-epenthesis problem, context-dependent models are obtained with a dynamic programming (DP) technique. Hidden Markov models (HMMs) are used to model the basic word units. We then describe the training of the bigram language model and the search algorithm used in our baseline system, which converts sentence-level gestures into synthesized speech and into the gestures of a 3D virtual human synchronously. Experiments show that these techniques are efficient in both recognition speed and recognition accuracy.
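
The feature-extraction step lends itself to a short illustration. Below is a minimal NumPy sketch of the two ideas named above: per-joint normalization of the glove angles, and expressing the left hand's tracker pose in the right hand's coordinate frame so that the feature no longer depends on where the signer stands. The function names, the calibration bounds, and the use of rotation matrices for orientation are assumptions made for illustration; the paper's exact parameterization may differ.

```python
import numpy as np

def normalize_joint_angles(raw, lo, hi):
    """Scale raw CyberGlove joint-angle readings to [0, 1] per joint.

    `lo` and `hi` are hypothetical per-signer calibration bounds; the
    abstract only states that each joint angle is normalized.
    """
    return (np.asarray(raw) - lo) / (np.asarray(hi) - lo + 1e-8)

def relative_hand_pose(pos_left, rot_left, pos_right, rot_right):
    """Express the left hand's tracker pose in the right hand's frame.

    pos_*: 3-vectors from the 3SPACE position trackers (world frame).
    rot_*: 3x3 rotation matrices giving each hand's orientation.
    """
    rel_pos = rot_right.T @ (pos_left - pos_right)  # left position in right-hand frame
    rel_rot = rot_right.T @ rot_left                # left orientation in right-hand frame
    return np.concatenate([rel_pos, rel_rot.ravel()])  # 12-dim feature vector
```

Because the world-frame position and orientation of the right hand cancel out of both terms, the resulting feature is invariant to the signer's location in the tracker workspace, which is the sense in which it is signer-position-independent.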
