Image-Based Arabic Sign Language Recognition

In this paper we propose an image-based system for Arabic Sign Language recognition. The recognition stage is performed with a Hidden Markov Model (HMM). A Gaussian skin-color model is used to detect the signer's face, and the detected face region then serves as a reference for tracking the hands, via region growing, through the sequence of images that make up each sign. A set of features is extracted from the detected hand regions across the image sequence and used as input to the HMM. The proposed system achieved a recognition accuracy of 98% on a data set of 50 signs.
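
To make the pipeline concrete, the following is a minimal sketch of the two main ingredients described above: a Gaussian skin-color likelihood used for face/skin detection, and per-sign HMM scoring of the extracted feature sequences. It is illustrative only, not the paper's implementation: the normalized r-g chromaticity space, the diagonal-covariance HMMs, the specific feature choices, and the use of the hmmlearn library are all assumptions made here for the example.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed library choice, not from the paper


def skin_probability(frame_rgb, mean, cov):
    """Per-pixel skin likelihood under a 2-D Gaussian in normalized r-g chromaticity.

    frame_rgb : (H, W, 3) uint8 image
    mean, cov : parameters of the skin-color Gaussian, estimated offline from
                labeled skin pixels (hypothetical training step).
    """
    rgb = frame_rgb.astype(np.float64) + 1e-6
    s = rgb.sum(axis=2, keepdims=True)
    rg = (rgb / s)[..., :2]                      # normalized r and g channels
    diff = rg - mean                             # (H, W, 2)
    inv = np.linalg.inv(cov)
    # Mahalanobis distance evaluated per pixel
    maha = np.einsum('hwi,ij,hwj->hw', diff, inv, diff)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * maha)            # (H, W) skin-likelihood map


def train_models(sequences_by_sign, n_states=5):
    """Train one Gaussian HMM per sign (illustrative parameters).

    sequences_by_sign : dict mapping sign label -> list of (T, D) feature arrays,
                        e.g. per-frame hand centroid relative to the face,
                        bounding-box size, motion vectors (assumed features).
    """
    models = {}
    for label, seqs in sequences_by_sign.items():
        X = np.vstack(seqs)                      # stack all sequences for this sign
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type='diag', n_iter=20)
        m.fit(X, lengths)
        models[label] = m
    return models


def classify_sign(feature_seq, models):
    """Pick the sign whose HMM assigns the highest log-likelihood to the sequence."""
    scores = {label: m.score(feature_seq) for label, m in models.items()}
    return max(scores, key=scores.get)
```

In this sketch, the skin-likelihood map would be thresholded to locate the face and hand regions (with the hands tracked by region growing around the previous detection), and the resulting per-frame feature vectors form the observation sequence scored by each sign's HMM.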
