Effect of Displaying Human Videos During an Evaluation Study of American Sign Language Animation

Many researchers internationally are studying how to synthesize computer animations of sign language; such animations have accessibility benefits for people who are deaf and have lower literacy in written languages. The field has not yet reached a consensus on how best to evaluate the quality of sign language animations, and this article explores an important methodological issue for researchers conducting experimental studies with participants who are deaf. Traditionally, when an animation is evaluated, lower and upper baselines are shown for comparison during the study. For the upper baseline, some researchers use carefully produced animations, while others use videos of human signers. Specifically, this article investigates whether, in studies where signers view animations of sign language and answer subjective and comprehension questions, participants' subjective and comprehension responses differ when videos of actual human signers are also shown during the study. Through three sets of experiments, we characterize how participants' Likert-scale subjective judgments of sign language animations are negatively affected when they are also shown videos of human signers for comparison, especially when the video and animation are displayed side by side. We also identify a small positive effect on the comprehension of sign language animations when studies include videos of human signers. Our results enable direct comparison of previously published evaluations of sign language animations that used different types of upper baseline (video or animation). Our results also provide methodological guidance for researchers who are designing evaluation studies of sign language animation or designing experimental stimuli or questions for participants who are deaf.

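The article's statistical analyses are not reproduced here, but as a rough illustration of the kind of between-group comparison that such a study design implies, the following Python sketch contrasts Likert-scale ratings of the same animation collected under two hypothetical study designs: one in which participants also saw human-signer videos as the upper baseline, and one in which they did not. Because Likert responses are ordinal, a non-parametric test such as Mann-Whitney U is a common choice. All variable names and rating values below are invented for illustration and are not data from the article.

```python
# Illustrative sketch only (not the authors' actual analysis pipeline):
# compare subjective ratings of an animation gathered under two study
# designs using a non-parametric test suited to ordinal Likert data.
from scipy.stats import mannwhitneyu

# Hypothetical 1-10 Likert ratings of an animation's grammatical correctness.
ratings_with_video_baseline = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]
ratings_without_video_baseline = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]

# Two-sided Mann-Whitney U test: do the two groups' rating distributions differ?
stat, p_value = mannwhitneyu(
    ratings_with_video_baseline,
    ratings_without_video_baseline,
    alternative="two-sided",
)
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
```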