American Sign Language (ASL) recognition based on Hough transform and neural networks

Abstract The work presented in this paper aims to develop a system for the automatic translation of static alphabet and sign gestures in American Sign Language. To this end, we use the Hough transform together with a neural network trained to recognize signs. Our system does not rely on gloves or visual markers to achieve the recognition task. Instead, it works on images of bare hands, which allows the user to interact with the system in a natural way. An input image is processed and converted to a feature vector that is compared with the feature vectors of a training set of signs. The extracted features are invariant to rotation, scaling, and translation of the gesture within the image, which makes the system more flexible. The system was implemented and tested on a data set of 300 hand-sign images (15 images per sign). Experiments showed that the system recognizes the selected ASL signs with an accuracy of 92.3%.
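The pipeline the abstract describes can be sketched in a simplified form: a coarse Hough accumulator is computed over a binary edge image and flattened into a normalized, fixed-length feature vector, which is then matched against the training vectors. This is a toy illustration under stated assumptions, not the authors' implementation: matching here uses a nearest-neighbour lookup in place of the paper's neural network, the bin counts (`n_rho`, `n_theta`) are arbitrary, and this minimal sketch does not include the extra normalization the paper applies to obtain rotation, scale, and translation invariance.

```python
import numpy as np

def hough_features(edges, n_rho=16, n_theta=18):
    """Accumulate a coarse Hough transform of a binary edge image and
    flatten it into a normalized, fixed-length feature vector."""
    h, w = edges.shape
    diag = np.hypot(h, w)                      # maximum possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta))
    ys, xs = np.nonzero(edges)                 # edge-pixel coordinates
    for j, theta in enumerate(thetas):
        # rho = x cos(theta) + y sin(theta), binned into n_rho cells
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        bins = ((rho + diag) / (2.0 * diag) * n_rho).astype(int)
        np.add.at(acc[:, j], np.clip(bins, 0, n_rho - 1), 1)
    vec = acc.ravel()
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def classify(feature, train_vectors, train_labels):
    """Match a feature vector to the closest training vector
    (nearest neighbour, standing in for the paper's neural network)."""
    dists = np.linalg.norm(train_vectors - feature, axis=1)
    return train_labels[int(np.argmin(dists))]

# Two synthetic "signs": a horizontal and a vertical edge.
horiz = np.zeros((32, 32), dtype=bool); horiz[16, 4:28] = True
vert = np.zeros((32, 32), dtype=bool);  vert[4:28, 16] = True
train = np.stack([hough_features(horiz), hough_features(vert)])
labels = ["horizontal", "vertical"]

# A slightly translated horizontal edge still matches "horizontal",
# because its Hough peak falls in (nearly) the same accumulator cell.
probe = np.zeros((32, 32), dtype=bool); probe[14, 4:28] = True
print(classify(hough_features(probe), train, labels))
```

The Hough accumulator is a natural feature space here because straight-edge structure in a hand silhouette collapses into a few strong peaks, so small pixel-level variations between samples of the same sign perturb the vector only slightly.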
