SignSpeak: Understanding, Recognition, and Translation of Sign Languages

The SignSpeak project is a first step toward bringing sign language recognition and translation to the scientific level already reached in related fields such as automatic speech recognition and statistical machine translation of spoken languages. Sign languages are the natural means of communication of Deaf communities. Although deaf, hard-of-hearing, and hearing signers can communicate among themselves without difficulty, integration into educational, social, and work environments remains a serious challenge for the deaf community. The overall goal of SignSpeak is to develop new vision-based technology for recognizing continuous sign language and translating it into text. New knowledge about the structure of sign language, gained from the perspective of machine recognition of continuous signing, is expected to enable a breakthrough in the development of this vision-based recognition and translation technology. Existing and newly created, publicly available corpora will be used to evaluate research progress throughout the project.
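The abstract describes a two-stage pipeline (vision-based recognition of continuous signing into an intermediate gloss sequence, followed by translation of that sequence into spoken-language text, evaluated on annotated corpora). The sketch below is purely illustrative of that described structure, not the project's actual system: every class, function, and gloss name is hypothetical, and toy placeholders stand in for the trained recognition and translation models.

```python
# Illustrative sketch only: a two-stage recognition-then-translation pipeline
# mirroring the abstract's description. All names are hypothetical; the toy
# stubs stand in for trained sequence models (recognition) and statistical
# machine translation (translation).

from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class VideoSegment:
    """A span of video frames assumed to contain one continuous signed utterance."""
    frame_features: List[List[float]]  # e.g. per-frame hand/pose descriptors


def recognize_glosses(segment: VideoSegment) -> List[str]:
    """Stage 1 (hypothetical): map frame features to a sign-gloss sequence.

    A real system would use a trained sequence model; here a fixed
    placeholder output is returned.
    """
    return ["IX-1p", "HOUSE", "GO"]


def translate_glosses(glosses: Sequence[str]) -> str:
    """Stage 2 (hypothetical): translate a gloss sequence into text.

    A real system would use statistical machine translation; a toy lookup
    table stands in for it here.
    """
    lexicon = {"IX-1p": "I", "HOUSE": "home", "GO": "go"}
    return " ".join(lexicon.get(g, g) for g in glosses)


def word_error_rate(hyp: Sequence[str], ref: Sequence[str]) -> float:
    """Levenshtein-based word error rate, a standard corpus-level metric."""
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[len(hyp)][len(ref)] / max(len(ref), 1)


if __name__ == "__main__":
    segment = VideoSegment(frame_features=[[0.0, 0.0]] * 30)
    glosses = recognize_glosses(segment)
    text = translate_glosses(glosses)
    reference = "I go home".split()
    print(text, "WER:", round(word_error_rate(text.split(), reference), 2))
```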
