CNN Model for American Sign Language Recognition

This paper proposes a convolutional neural network (CNN) model for hand gesture recognition and classification. The dataset covers 26 different hand gestures, which map to the letters A–Z of the English alphabet. The standard Hand Gesture Recognition dataset, available on Kaggle, is used in this paper; it contains 27,455 images (28 × 28 pixels) of hand gestures made by different people. A deep learning approach based on a CNN automatically learns and extracts features for classifying each gesture. The paper presents a comparative study against four recent works, and the proposed model reports 99% test accuracy.
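The abstract does not specify the network architecture, so the sketch below is only a minimal, hypothetical CNN for 28 × 28 grayscale inputs and 26 output classes, written with tf.keras. The layer widths, dropout rate, and optimizer are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch (NOT the authors' exact architecture, which the abstract
# does not specify): a small CNN for 28x28 grayscale gesture images with
# 26 output classes. Layer sizes and hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes: int = 26) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),                  # grayscale input
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),                           # 28x28 -> 14x14
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),                           # 14x14 -> 7x7
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                              # regularization
        layers.Dense(num_classes, activation="softmax"),  # class probabilities
    ])
    # Integer class labels (0-25) are assumed, hence the sparse loss.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```

Under these assumptions, the model is trained directly on the raw pixel arrays; the convolutional layers learn the gesture features automatically, which is the property the paper attributes to its CNN approach.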
