Generating Data for Signer Adaptation

Signer adaptation is one of the open problems in sign language recognition (SLR). Unlike spoken language, sign language contains a large number of "phonemes", so collecting enough data to adapt a recognition system to a new signer is impractical. This paper presents a signer adaptation method for continuous density hidden Markov models (HMMs) that requires only a small amount of data. First, the hand shapes, positions, and orientations that compose all sign words are extracted with a clustering algorithm and treated as basic units. From a small set of sign words that covers these basic units, adaptation data for all sign words are generated. Statistics gathered from the generated data are then used to estimate a linear regression-based transformation of the Gaussian mean vectors. To verify the effectiveness of the proposed method, experiments are carried out on a 350-word vocabulary of Chinese Sign Language (CSL). All basic units of hand shape, position, and orientation are found, and from them adaptation data for the 350 sign words are generated. Experimental results demonstrate that the proposed method performs comparably to adaptation that uses the original samples of all 350 sign words.
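
The mean-vector transformation described above matches the standard MLLR formulation for continuous density HMMs. Below is a minimal NumPy sketch of that step, assuming sufficient statistics (occupation counts and first-order accumulators) have already been collected from the generated adaptation data. The function name, the use of a single global transform, and the diagonal-covariance assumption are illustrative choices, not details taken from the paper.

    import numpy as np

    def estimate_mllr_mean_transform(mu, var, gamma, obs_acc):
        # mu      : (M, D) Gaussian component means
        # var     : (M, D) diagonal covariances
        # gamma   : (M,)   occupation counts, sum_t gamma_m(t)
        # obs_acc : (M, D) first-order stats, sum_t gamma_m(t) * o_t
        # Returns W of shape (D, D+1) such that the adapted mean of
        # component m is W @ [1, mu_m] (the extended mean vector).
        M, D = mu.shape
        xi = np.hstack([np.ones((M, 1)), mu])        # extended means, (M, D+1)
        W = np.zeros((D, D + 1))
        for i in range(D):                           # with diagonal covariances,
            inv_var = gamma / var[:, i]              # each row of W decouples
            G = (xi * inv_var[:, None]).T @ xi       # (D+1, D+1) accumulator
            k = xi.T @ (obs_acc[:, i] / var[:, i])   # (D+1,) accumulator
            W[i] = np.linalg.solve(G, k)             # row-wise ML solution
        return W

    # Apply the transform to obtain adapted means for all components:
    #   mu_hat = np.hstack([np.ones((M, 1)), mu]) @ W.T

Because a single transform is shared across all Gaussian components, it can be estimated reliably even from the small amount of generated adaptation data, which is what makes this style of adaptation suitable for the low-data setting the paper targets.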
