Subunit Modeling for Japanese Sign Language Recognition Based on Phonetically Dependent Multi-stream Hidden Markov Models

We work on automatic Japanese Sign Language (JSL) recognition using Hidden Markov Models (HMMs). An important issue in modeling signs is how to determine the constituent elements of a sign (i.e., subunits), analogous to phonemes in spoken language. We focus on a special feature of sign language: JSL is composed of three types of phonological elements, namely local hand information (hand shape), position, and movement. In this paper, we propose an efficient method of generating subunits using multi-stream HMMs whose streams correspond to these phonological elements. An isolated-word recognition experiment confirms the effectiveness of the proposed method.
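As a rough illustration of the multi-stream idea described above, the sketch below combines per-stream emission log-likelihoods with stream weights, one stream per phonological element. The Gaussian emissions, stream names, and weight values are illustrative assumptions for this sketch, not details taken from the paper.

```python
import math

def gaussian_log_pdf(x, mean, var):
    """Log density of a 1-D Gaussian emission (illustrative emission model)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def multistream_log_emission(obs, stream_params, stream_weights):
    """Multi-stream HMM emission score: sum_s w_s * log b_s(o_s),
    where each stream s models one phonological element."""
    total = 0.0
    for name, weight in stream_weights.items():
        mean, var = stream_params[name]
        total += weight * gaussian_log_pdf(obs[name], mean, var)
    return total

# Three streams mirroring the paper's phonological elements of JSL;
# all parameter and observation values below are made up for the example.
params = {"handshape": (0.0, 1.0), "position": (1.0, 0.5), "movement": (-1.0, 2.0)}
weights = {"handshape": 0.4, "position": 0.3, "movement": 0.3}
obs = {"handshape": 0.1, "position": 0.9, "movement": -0.8}
score = multistream_log_emission(obs, params, weights)
```

In a full recognizer this weighted per-stream score would replace the single-stream emission probability inside the usual HMM forward/Viterbi computation, letting each phonological element contribute independently to the state likelihood.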
