American Sign Language Recognition using Hidden Markov Models and Wearable Motion Sensors

In this paper, we propose an efficient and non-invasive solution for translating American Sign Language (ASL) into speech using two Myo wearable armbands. The compact Myo armbands used in this study are considerably more practical than existing approaches such as glove-based techniques, camera-based systems, and 3D depth sensors. We apply the Gaussian mixture model hidden Markov model (GMM-HMM) technique and achieve classification rates of up to 96.15% on ASL words (gestures). The HMM-based approach also lays a solid foundation for future extensions of the system, including continuous ASL recognition and signer independence.
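The classification scheme described above trains one HMM per ASL word and labels a new sensor sequence with the word whose model scores it highest. The sketch below illustrates that decision rule with a minimal pure-Python forward algorithm over single-Gaussian emissions (a one-component special case of the GMM emissions used in the paper); the word names, state counts, and all model parameters are illustrative assumptions, not values from the paper.

```python
import math

def slog(x):
    # log that tolerates zero probabilities (log 0 -> -inf)
    return math.log(x) if x > 0 else float("-inf")

def log_gauss(x, mu, var):
    # log N(x; mu, var) for a scalar observation
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def forward_loglik(obs, pi, A, mus, vars_):
    # Log-space forward algorithm: total log-likelihood of obs under the HMM.
    n = len(pi)
    alpha = [slog(pi[i]) + log_gauss(obs[0], mus[i], vars_[i]) for i in range(n)]
    for t in range(1, len(obs)):
        new = []
        for j in range(n):
            terms = [alpha[i] + slog(A[i][j]) for i in range(n)]
            m = max(terms)
            s = m + math.log(sum(math.exp(v - m) for v in terms))
            new.append(s + log_gauss(obs[t], mus[j], vars_[j]))
        alpha = new
    m = max(alpha)
    return m + math.log(sum(math.exp(v - m) for v in alpha))

# Two hypothetical 2-state left-to-right word models (parameters invented for
# illustration): one word emits near 0-1, the other near 5-6.
models = {
    "hello": dict(pi=[1.0, 0.0], A=[[0.7, 0.3], [0.0, 1.0]],
                  mus=[0.0, 1.0], vars_=[1.0, 1.0]),
    "thanks": dict(pi=[1.0, 0.0], A=[[0.7, 0.3], [0.0, 1.0]],
                   mus=[5.0, 6.0], vars_=[1.0, 1.0]),
}

def classify(obs):
    # Score the sequence under every word's HMM; return the most likely word.
    return max(models, key=lambda w: forward_loglik(obs, **models[w]))

print(classify([0.1, 0.3, 0.9, 1.2]))  # near-zero sequence -> "hello"
```

A left-to-right transition matrix (states can only advance) is the conventional topology for gesture HMMs, since a sign unfolds through an ordered sequence of hand/arm phases; real Myo input would be a multi-channel EMG/IMU feature vector rather than the scalar stream used here.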
