Toward a mobile service for hard of hearing people to make information accessible anywhere

Deaf and hard of hearing people can find it difficult to follow the rapid pace of daily life, largely because of the lack of services that increase access to information. For hearing impairment there are no specific solutions that make information accessible anywhere, even though this community has very specific needs related to learning and understanding any written language. Hearing impairment is an invisible but frequent disability: it is estimated that more than 8% of the world's population suffers from hearing loss. According to many studies, the reading level of hearing-impaired students is lower than that of hearing students. In fact, many deaf people have difficulties with reading and writing; they cannot read and understand all the information found in a newspaper, on a vending machine when taking public transport, in an instruction leaflet, and so on. Most visual textual information is thus not accessible to this category of people with disabilities. A number of obstacles still have to be removed to make information truly accessible to all, and this is crucial for their personal development and successful integration. In this paper we propose a solution to this problem: a mobile translation system that exploits the technological advances of smartphones to improve information accessibility anywhere. We rely on text image processing, virtual reality 3D modeling, and cloud computing to generate a real-time sign language interpretation performed by a high-quality virtual character.
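The pipeline sketched in the abstract (captured text image, then OCR, then translation to sign language gloss, then avatar rendering) can be outlined in code. This is a minimal illustrative sketch, not the authors' implementation: every function name and the toy word-to-gloss lexicon are assumptions, and the OCR step is a stub standing in for a real engine or cloud OCR service.

```python
# Hypothetical sketch of the mobile interpretation pipeline:
# camera text image -> OCR -> written text -> sign language gloss -> avatar playback.
# All names here are illustrative assumptions, not the paper's actual API.

def recognize_text(image_bytes: bytes) -> str:
    """Placeholder OCR step; a real system would run an OCR engine
    (possibly in the cloud) on the camera capture."""
    # Stand-in: pretend the captured image decodes to this caption.
    return "bus leaves at noon"

def text_to_gloss(text: str) -> list[str]:
    """Toy word-to-gloss mapping; real sign language translation must
    reorder words and handle sign grammar, not substitute word by word."""
    lexicon = {"bus": "BUS", "leaves": "DEPART", "at": "", "noon": "NOON"}
    # Words mapped to "" (function words with no sign) are dropped.
    return [lexicon.get(w, w.upper()) for w in text.split()
            if lexicon.get(w, w) != ""]

def interpret(image_bytes: bytes) -> list[str]:
    """End-to-end: OCR the image, translate to gloss, and return the
    gloss sequence the 3D virtual character would sign."""
    return text_to_gloss(recognize_text(image_bytes))

print(interpret(b"<camera frame>"))  # -> ['BUS', 'DEPART', 'NOON']
```

In the architecture the abstract describes, the OCR and translation steps would typically run server-side (cloud computing), with the phone sending the captured image and receiving the gloss sequence that drives the 3D avatar animation in real time.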
