User-independent recognition of Arabic sign language for facilitating communication with the deaf community

This paper presents a solution for user-independent recognition of isolated Arabic sign language gestures. The video-based gestures are preprocessed to segment out the signer's hands based on color segmentation of the colored gloves. The prediction errors of consecutive segmented images are then accumulated into two images according to the direction of motion. Different accumulation weights are employed to further preserve the directionality of the projected motion. A gesture is normally represented by hand movements; however, additional user-dependent head and body movements may be present. In the user-independent mode, we seek to filter out such user-dependent information. This is achieved by encapsulating the movements of the segmented hands in a bounding box. The encapsulated images of the projected motion are then transformed into the frequency domain using the discrete cosine transform (DCT). Feature vectors are formed by applying zonal coding to the DCT coefficients with varying cutoff values. Classification techniques such as k-nearest neighbors (KNN) and polynomial classifiers are used to assess the validity of the proposed user-independent feature-extraction schemes. An average classification rate of 87% is reported.
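The pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the weighting scheme in `directional_accumulation` (later frame differences weighted more heavily in the forward image, earlier ones in the backward image) and the triangular zone shape in `zonal_features` are assumptions made for the sketch; the paper only states that directional weights and zonal coding with varying cutoffs are used.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: rows indexed by frequency k,
    # columns by sample index j.
    j = np.arange(n)[None, :]
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    M[0] *= 1.0 / np.sqrt(n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def dct2(img):
    # Separable 2D DCT of a grayscale image.
    M = dct_matrix(img.shape[0])
    N = dct_matrix(img.shape[1])
    return M @ img @ N.T

def directional_accumulation(frames):
    # Accumulate absolute frame-to-frame prediction errors into two
    # images. Assumed weighting: forward image emphasizes later motion,
    # backward image emphasizes earlier motion.
    n = len(frames) - 1
    fwd = sum((t + 1) / n * np.abs(frames[t + 1] - frames[t]) for t in range(n))
    bwd = sum((n - t) / n * np.abs(frames[t + 1] - frames[t]) for t in range(n))
    return fwd, bwd

def zonal_features(img, cutoff):
    # Zonal coding: keep the low-frequency DCT coefficients inside the
    # triangular zone row + col < cutoff, flattened into a feature vector.
    C = dct2(img)
    r, c = np.indices(C.shape)
    return C[(r + c) < cutoff]
```

The resulting feature vectors (one per accumulated-motion image, concatenated per gesture) can then be fed to any off-the-shelf classifier, e.g. a KNN classifier over the training signers' vectors. Increasing the zonal cutoff trades a longer feature vector for more retained spatial-frequency detail.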
