Automatic Synthesis of Training Data for Sign Language Recognition Using HMM

The paper describes a method of synthesizing sign language samples for training HMMs. First, face and hand regions are detected, and sign language features are extracted. To generate the HMMs, training data are automatically synthesized from a limited number of actual samples. We focus on hand shapes that are common to different words: a database of hand shapes is generated, and the training data for each word are synthesized by substituting matching shapes from the database. Experiments on real image sequences are presented.
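The core idea, that words sharing a hand shape can exchange feature segments to multiply the training set, can be illustrated with a minimal sketch. This is a toy illustration under assumed representations, not the paper's implementation: the hand-shape labels, feature vectors, and function names below are all hypothetical.

```python
import random

def build_shape_database(samples):
    """Collect feature segments per hand-shape label across all samples.

    Each sample is a list of (hand_shape_label, feature_vector) segments;
    this representation is an assumption for illustration.
    """
    db = {}
    for sample in samples:
        for shape, features in sample:
            db.setdefault(shape, []).append(features)
    return db

def synthesize(sample, db, rng):
    """Synthesize a new training sequence by replacing each segment
    with a randomly chosen same-shape segment from the database."""
    return [(shape, rng.choice(db[shape])) for shape, _ in sample]

# Two hypothetical word samples that share hand shape "A".
word1 = [("A", [0.1, 0.2]), ("B", [0.3, 0.4])]
word2 = [("A", [0.5, 0.6]), ("C", [0.7, 0.8])]

db = build_shape_database([word1, word2])
new_sample = synthesize(word1, db, random.Random(0))
# The hand-shape sequence of word1 is preserved, while the "A" segment
# may now carry features observed in word2.
assert [shape for shape, _ in new_sample] == ["A", "B"]
```

Sequences produced this way could then be used, alongside the original samples, to estimate the per-word HMM parameters.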