The significance of facial features for automatic sign language recognition

Although facial features are considered essential for humans to understand sign language, no prior work has examined their significance for automatic sign language recognition or presented evaluation results. This paper describes a vision-based recognition system that employs both manual and facial features extracted from the same input image. For facial feature extraction, an active appearance model is applied to identify regions of interest such as the eyes and the mouth. Afterwards, a numerical description of facial expression and lip outline is computed. An extensive evaluation was performed on a new sign language corpus containing continuous articulations by 25 native signers. The results demonstrate the importance of integrating facial expressions into the classification process: recognition rates for both isolated and continuous signing increased in signer-dependent as well as signer-independent operation. Notably, roughly two out of ten signs were recognized from the facial features alone.
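
The described pipeline amounts to: locate the eye and mouth regions with an active appearance model (AAM), derive numerical descriptors of facial expression and lip outline, and combine them with the manual (hand) features before classification. The sketch below is a minimal illustration of that per-frame feature construction; the landmark layout, the specific descriptors, and the `combine_features` helper are assumptions made for illustration, not the implementation reported in the paper.

```python
import numpy as np

def facial_descriptors(landmarks: np.ndarray) -> np.ndarray:
    """Compute simple facial descriptors from AAM landmark coordinates.

    `landmarks` is assumed to be an (N, 2) array where indices 0-5 outline
    the left eye, 6-11 the right eye, and 12-23 the lips (hypothetical layout).
    """
    left_eye = landmarks[0:6]
    right_eye = landmarks[6:12]
    lips = landmarks[12:24]

    # Eye openness: vertical extent of each eye contour.
    left_open = left_eye[:, 1].max() - left_eye[:, 1].min()
    right_open = right_eye[:, 1].max() - right_eye[:, 1].min()

    # Lip outline descriptors: mouth width, height, and aspect ratio.
    mouth_w = lips[:, 0].max() - lips[:, 0].min()
    mouth_h = lips[:, 1].max() - lips[:, 1].min()
    aspect = mouth_h / (mouth_w + 1e-6)

    return np.array([left_open, right_open, mouth_w, mouth_h, aspect])

def combine_features(manual: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    """Concatenate manual (hand) features with facial descriptors for one frame."""
    return np.concatenate([manual, facial_descriptors(landmarks)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    manual_feat = rng.random(8)             # stand-in for hand position/shape features
    aam_points = rng.random((24, 2)) * 100  # dummy AAM landmark coordinates
    frame_vector = combine_features(manual_feat, aam_points)
    print(frame_vector.shape)  # per-frame feature vector passed on to the classifier
```

In a sketch like this, the per-frame vectors would then be fed to a sequence classifier (e.g. an HMM-based recognizer), which is why the facial descriptors are concatenated with the manual features rather than classified separately.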
