Real-Time Static Gesture Detection Using Deep Learning

Sign gesture recognition is an important problem in human-computer interaction with significant societal impact. It is, however, a very complex task, since sign gestures are naturally deformable objects. Gesture recognition has remained an open problem for the last two decades, plagued by issues such as low accuracy or low speed, and despite many proposed methods, none has fully resolved them. In this paper, we propose a deep learning approach for translating sign language gestures into text. We introduce a self-generated image dataset for American Sign Language (ASL) covering 36 classes: the letters A to Z and the digits 0 to 9. The proposed system recognizes static gestures and can learn and classify the specific sign gestures of any person. A convolutional neural network (CNN) is proposed for classifying ASL images into text. The system achieved 99% accuracy on alphabet gestures and 100% accuracy on digits, outperforming the accuracy reported by existing systems.
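
To make the classification setup concrete, the sketch below shows a minimal CNN for 36 static gesture classes (26 letters plus 10 digits) in Keras/TensorFlow. The input resolution (64x64 grayscale), layer sizes, optimizer, and loss are illustrative assumptions, not the paper's exact architecture, which the abstract does not specify.

    # Minimal sketch of a 36-class static ASL gesture classifier.
    # Assumptions: 64x64 grayscale inputs, integer class labels,
    # and a small Conv-Pool stack; the paper's actual architecture
    # and preprocessing may differ.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 36  # A-Z letters (26) + 0-9 digits (10)

    def build_model(input_shape=(64, 64, 1)):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),   # low-level edge features
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),   # higher-level hand-shape features
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.5),                        # regularization against overfitting
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    if __name__ == "__main__":
        build_model().summary()

In a real-time pipeline, each captured frame would be cropped to the hand region, resized to the model's input shape, and passed through model.predict, with the argmax of the softmax output mapped back to its letter or digit label.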