Possibility theory based continuous Indian Sign Language gesture recognition

Sign language is a vibrant field of research today because it helps establish communication between the hearing-impaired community and the hearing community. In this paper, we propose a novel continuous Indian Sign Language (ISL) gesture recognition technique based on possibility theory (PT). Preprocessing and extraction of overlapping frames (the start and end points of each gesture) are the major issues covered in this paper; they are addressed using background modeling and a novel gradient method. Overlapping frames are helpful for fragmenting a continuous ISL gesture stream into isolated gestures, which are then further processed and classified. During the segmentation process, some gesture structures, such as the shape and orientation of the hand, become deformed. A wavelet descriptor is applied here to extract correct features from these deformed shapes, and these are combined with two other features (orientation and speed). Owing to its multiresolution property, the wavelet descriptor is well suited to extracting moment-invariant features from the shape of the hand. The three feature vectors (orientation, speed, and moment) are combined in parallel and classified using possibility theory, which is well suited to handling the uncertainty and imprecision present between intermediate frames. Experiments are performed on 10 sentences of continuous ISL comprising 1000 samples. This data set was created in the Robotics and AI Laboratory, Indian Institute of Information Technology, Allahabad, India, with data from 4 persons used for training and 6 persons used for testing. Analysis of the results shows that the proposed approach achieves 92% classification accuracy on continuous ISL. The classified isolated ISL gestures are then combined to generate the recognized sentence in the form of text or words.
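
To make the fusion and decision step concrete, the following is a minimal sketch (in Python) of a possibility-theoretic classifier under assumptions not spelled out in the abstract: each feature channel (orientation, speed, wavelet moment) is assumed to yield a possibility distribution over the gesture classes, the channels are fused conjunctively with the min operator, and the class with the highest fused possibility is selected. The function names and the distance-to-possibility mapping are illustrative, not the authors' exact formulation.

```python
import numpy as np

def to_possibility(distances):
    """Map per-class distances to a possibility distribution in [0, 1].
    Illustrative choice: closer class prototypes are more possible; the
    best class gets possibility 1 after normalization."""
    poss = np.exp(-np.asarray(distances, dtype=float))
    return poss / poss.max()

def classify_gesture(orientation_dist, speed_dist, moment_dist, class_labels):
    """Fuse the three feature channels conjunctively (min operator) and
    return the class with maximum fused possibility."""
    channels = [to_possibility(d) for d in (orientation_dist, speed_dist, moment_dist)]
    fused = np.minimum.reduce(channels)                      # conjunctive (min) fusion
    if fused.max() > 0:
        fused = fused / fused.max()                          # renormalize to [0, 1]
    return class_labels[int(np.argmax(fused))], fused

# Example with hypothetical per-class distances from the three channels
labels = ["HELLO", "THANK_YOU", "WATER"]
label, fused = classify_gesture([0.4, 1.2, 2.0], [0.3, 1.0, 1.8], [0.5, 0.9, 2.2], labels)
print(label, fused)
```

The min-based fusion keeps a class highly possible only if all three feature channels find it plausible, which is one standard way possibility theory accommodates imprecise evidence from individual features.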
