Brazilian Sign Language Recognition Using Kinect

The simultaneous-sequential nature of sign language production, which combines hand gestures and body motion with facial expressions, still challenges sign language recognition algorithms. This paper presents a method to recognize Brazilian Sign Language (Libras) using the Kinect sensor. Skeleton information is used to segment sign gestures from a continuous stream, while depth information provides distinctive features. The method was assessed on a new dataset of 107 medical signs selected from common dialogues in health-care centers. A dynamic time warping-based k-nearest-neighbor classifier (DTW-kNN), evaluated with leave-one-out cross-validation, achieved outstanding recognition results.
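The DTW-kNN classification step can be illustrated with a minimal sketch. This is not the paper's implementation: the feature sequences, the Euclidean local cost, and the 1-NN gallery structure below are assumptions made for illustration only.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two feature sequences of shape (time, features),
    using Euclidean local cost (an assumed cost; the paper's features differ)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Classic DTW recurrence: extend the cheapest warping path.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_1nn(query, gallery):
    """Assign the query sequence the label of its nearest gallery
    sequence under DTW (1-NN); leave-one-out CV would call this once
    per sample, with that sample held out of the gallery."""
    return min(gallery, key=lambda item: dtw_distance(query, item[0]))[1]
```

Because DTW aligns sequences nonlinearly in time, signs performed at different speeds can still be matched to the same template, which is why it pairs naturally with a nearest-neighbor classifier for gesture recognition.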