Non-manual cues in automatic sign language recognition

The present work addresses the incorporation of non-manual cues into automatic sign language recognition. Specifically, eye gaze, head pose, and facial expressions are discussed in relation to their grammatical and syntactic functions, and means of including them in the recognition phase are investigated. Computer vision issues related to extracting facial features, eye gaze, and head pose are presented, and classification approaches for incorporating these non-manual cues into the overall sign language recognition architecture are introduced.
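One common way to incorporate non-manual cues into a recognition architecture is late fusion: separate classifiers score each cue channel, and their outputs are combined per hypothesis. The sketch below is a minimal illustration of weighted log-linear score fusion; the cue names, scores, and weights are hypothetical and not taken from the work described above.

```python
import math

def fuse_cue_scores(cue_scores, weights):
    """Combine per-cue class probabilities by weighted log-linear fusion.

    cue_scores: dict mapping cue name -> {sign_label: probability}
    weights:    dict mapping cue name -> fusion weight
    Returns the sign label with the highest combined score.
    """
    labels = next(iter(cue_scores.values())).keys()
    combined = {}
    for label in labels:
        # Weighted sum of log-probabilities across cues; the floor
        # avoids log(0) when a classifier assigns zero probability.
        combined[label] = sum(
            weights[cue] * math.log(max(scores[label], 1e-12))
            for cue, scores in cue_scores.items()
        )
    return max(combined, key=combined.get)

# Hypothetical per-frame scores from manual and non-manual classifiers.
scores = {
    "manual":    {"WHO": 0.6, "WHAT": 0.4},
    "eye_gaze":  {"WHO": 0.3, "WHAT": 0.7},
    "head_pose": {"WHO": 0.2, "WHAT": 0.8},
}
weights = {"manual": 0.6, "eye_gaze": 0.2, "head_pose": 0.2}

print(fuse_cue_scores(scores, weights))  # non-manual cues tip the decision to "WHAT"
```

Note how the non-manual channels overturn the manual classifier's preference: with these weights, the manual cue alone would pick "WHO", but the fused score favours "WHAT". In practice the weights would be tuned on held-out data.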