Towards an assistive tool for Greek sign language communication

Hearing-impaired people face major difficulties in everyday communication with other people. Sign languages (SLs) are their basic means of communication and form their natural way of speaking. Systems that could act as interpreters between deaf people and people who do not use SL would make the former's lives considerably easier. Such a system would have to cope with bidirectional translation between sign language sentences and spoken language sentences. Most recent research work, however, has concerned SL recognition rather than spoken-language interpretation. The system we present likewise aims at Greek sign language (GSL) recognition, using hidden Markov models (HMMs). The recognition rates we achieved for GSL sentences formed from a 33-sign vocabulary exceed 86% and are quite promising.
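To illustrate the general idea behind HMM-based sign recognition (not the authors' actual system, whose features, topology, and training data are not given here), the sketch below classifies a discrete observation sequence by evaluating it under one HMM per sign with the forward algorithm and picking the most likely sign. The toy signs "HELLO" and "THANKS" and all model parameters are invented for illustration only.

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space for stability.
    pi: initial state probabilities, A: transition matrix,
    B: emission matrix (states x symbols)."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # alpha_t(j) = sum_i alpha_{t-1}(i) * A[i, j], then emit symbol o
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """Return the sign whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda sign: log_forward(obs, *models[sign]))

# Two hypothetical 2-state, left-to-right sign models over a binary symbol alphabet.
models = {
    "HELLO": (np.array([0.99, 0.01]),
              np.array([[0.7, 0.3], [0.05, 0.95]]),
              np.array([[0.9, 0.1], [0.1, 0.9]])),   # emits 0s, then 1s
    "THANKS": (np.array([0.99, 0.01]),
               np.array([[0.7, 0.3], [0.05, 0.95]]),
               np.array([[0.1, 0.9], [0.9, 0.1]])),  # emits 1s, then 0s
}

print(classify([0, 0, 1, 1], models))  # sequence matching the "HELLO" model
print(classify([1, 1, 0, 0], models))  # sequence matching the "THANKS" model
```

In a continuous-recognition setting such as the one described above, per-sign models would instead be concatenated under a sentence grammar and decoded jointly, and the discrete symbols would be replaced by feature vectors extracted from the signer's hands.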
