Sign language is the primary means of communication for the speech-impaired; it conveys meaning through hand gestures and movements. Several studies have attempted to recognize hand gestures by computer using either a data glove, which relies on several sensors to determine hand pose and orientation, or a vision-based system, which uses a camera to track hand position and the gestures made. We introduce a system that combines a prototype data glove with computer vision to translate Filipino Sign Language for medical purposes. The system applies the Viterbi algorithm to find the best-fitting gesture under a hidden Markov model (HMM). Three training sets were used to validate the system's recognition accuracy. The first set, covering the alphabet and numbers, achieved a recognition accuracy of 71.8%. The second set, composed of health-care-related words, produced a recognition rate of 80.6%. The third set comprised the full vocabulary of 26 letters of the alphabet, the numbers zero to nine, and 30 words commonly used in health-care work, resulting in an accuracy of 80.55%.
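The abstract does not detail the authors' decoder, but the standard Viterbi algorithm it names can be sketched in a few lines. The snippet below is a generic illustration, not the paper's implementation: it assumes an HMM given by an initial distribution `pi`, a transition matrix `A`, and an emission matrix `B` (all hypothetical names), and recovers the most likely hidden-state sequence for a discrete observation sequence such as quantized glove/vision features.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for an observation sequence under an HMM.

    obs : sequence of observation indices
    pi  : initial state probabilities, shape (n_states,)
    A   : transition matrix, A[i, j] = P(j | i)
    B   : emission matrix, B[i, k] = P(obs k | state i)
    Log probabilities are used to avoid numerical underflow.
    """
    T = len(obs)
    n_states = len(pi)
    logA, logB = np.log(A), np.log(B)

    # delta[i]: best log-probability of any path ending in state i
    delta = np.log(pi) + logB[:, obs[0]]
    psi = np.zeros((T, n_states), dtype=int)  # back-pointers

    for t in range(1, T):
        # scores[i, j]: arrive in state j at time t coming from state i
        scores = delta[:, None] + logA
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]

    # Trace back from the best final state to recover the path.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1], float(delta.max())
```

In a gesture recognizer of the kind described, one such model would typically be trained per gesture (or per sign), and the decoded log-probability used to pick the best gesture fit.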