Accurate and Accessible Motion-Capture Glove Calibration for Sign Language Data Collection

Motion-capture recordings of sign language are used in research on automatic sign language recognition and on the generation of sign language animations, technologies with accessibility applications for deaf users who have low levels of written-language literacy. Motion-capture gloves record the wearer's handshape, but unfortunately they require a time-consuming and inexact calibration process each time they are worn. This article describes the design and evaluation of a new calibration protocol for motion-capture gloves, intended to make the process more efficient and to be accessible to participants who are deaf and use American Sign Language (ASL). The protocol was evaluated experimentally: deaf ASL signers wore the gloves, were calibrated (using both the new protocol and a calibration routine provided by the glove manufacturer), and performed sequences of ASL handshapes. Five native ASL signers rated the correctness and understandability of the collected handshape data. In an additional evaluation, ASL signers performed ASL stories while wearing the gloves and a motion-capture bodysuit (in some cases calibrated with the new protocol, in others with the standard one). Later, twelve native ASL signers watched animations produced from this motion-capture data and answered comprehension questions about the stories. In both evaluation studies, the new protocol received significantly higher scores than the standard calibration. The protocol has been made freely available online; it includes directions for the researcher, images and videos showing how participants move their hands during the process, and directions for participants (as ASL videos and English text).
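The abstract describes the calibration problem but not its underlying mathematical model. As an illustration only, the sketch below shows one common approach for bend-sensor data gloves: a per-sensor linear gain/offset map fitted from two reference poses with assumed target joint angles (a flat hand at 0 degrees and a fist at 90 degrees). The sensor count, pose angles, and placeholder readings are assumptions for this sketch, not the protocol evaluated in this article.

```python
import numpy as np

# Illustrative two-pose data-glove calibration using a per-sensor
# linear gain/offset model. This is NOT the article's protocol; the
# sensor count, target angles, and readings below are assumptions.

N_SENSORS = 22  # assumed sensor count (e.g., a 22-sensor glove)

# Raw sensor readings captured while the wearer holds two reference
# poses (placeholder values standing in for real glove output).
raw_flat = np.random.uniform(30, 60, N_SENSORS)    # flat-hand pose
raw_fist = np.random.uniform(180, 220, N_SENSORS)  # fist pose

angle_flat = np.zeros(N_SENSORS)       # assume 0 deg flexion when flat
angle_fist = np.full(N_SENSORS, 90.0)  # assume 90 deg flexion in a fist

# Solve angle = gain * raw + offset per sensor from the two poses.
gain = (angle_fist - angle_flat) / (raw_fist - raw_flat)
offset = angle_flat - gain * raw_flat

def raw_to_angles(raw: np.ndarray) -> np.ndarray:
    """Map one frame of raw sensor values to estimated joint angles (deg)."""
    return gain * raw + offset

# Example: a frame halfway between the reference readings maps to
# roughly 45 degrees of flexion per joint under this linear model.
frame = (raw_flat + raw_fist) / 2
print(raw_to_angles(frame))
```

A two-pose linear fit like this is quick but brittle, since sensor response can be nonlinear and coupled across joints; protocols such as the one described here typically add more reference handshapes and clearer instructions to the wearer to improve accuracy.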
