Assessing the deaf user perspective on sign language avatars

Signing avatars have the potential to become a useful and even cost-effective method of making written content more accessible to Deaf people. However, avatar research is characterized by the fact that most researchers are not members of the Deaf community and that Deaf people, as potential users, have little or no knowledge about avatars. We therefore suggest two well-known methods, focus groups and online studies, as a two-way information exchange between researchers and the Deaf community. Our aim was to assess the acceptability of signing avatars, the shortcomings of current avatars, and potential use cases. We conducted two focus group interviews (N=8) and, to quantify important issues, created an accessible online user study (N=317). This paper covers both the methodology used and the elicited opinions and criticisms. While we found a positive baseline response to the idea of signing avatars, we also show a statistically significant increase in positive opinion caused by participating in the studies. We argue that including Deaf people on many levels will foster acceptance and provide important feedback on key aspects of avatar technology that need to be improved.
