Learning a Vector-Based Model of American Sign Language Inflecting Verbs from Motion-Capture Data

American Sign Language (ASL) synthesis software can improve the accessibility of information and services for deaf individuals with low English literacy. The synthesis component of current ASL animation generation and scripting systems has limited handling of the many ASL verb signs whose movement path is inflected to indicate 3D locations in the signing space associated with discourse referents. Using motion-capture data recorded from human signers, we model how the motion paths of verb signs vary based on the 3D locations of their subject and object. This model yields a lexicon for ASL verb signs parameterized on the 3D locations of the verb's arguments; such a lexicon enables more realistic and understandable ASL animations. A new model presented in this paper, based on identifying the principal movement vector of the hands, shows improved accuracy in modeling ASL verb signs, even when trained on movement data from a different human signer.
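
To make the abstract's core idea concrete, here is a minimal sketch of one way a vector-based, location-parameterized verb model could be set up. This is an illustrative assumption, not the paper's actual implementation: it extracts a principal movement vector from a recorded hand trajectory via PCA over frame-to-frame displacements, then fits a simple linear least-squares map from the subject's and object's 3D signing-space locations to that vector. All function names, the 7-dimensional input encoding, and the linear formulation are hypothetical choices made for this sketch.

```python
# Sketch (NOT the authors' model): learn a verb-lexicon entry that maps the
# 3D locations of a verb's subject and object to a principal movement vector.
import numpy as np

def principal_movement_vector(trajectory):
    """First principal component of frame-to-frame hand displacements for a
    (T, 3) trajectory, sign-aligned with the net start-to-end movement and
    scaled by the path's extent along that component."""
    displacements = np.diff(trajectory, axis=0)          # (T-1, 3)
    centered = displacements - displacements.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                                    # unit vector, (3,)
    net = trajectory[-1] - trajectory[0]
    if np.dot(direction, net) < 0:                       # resolve sign ambiguity
        direction = -direction
    return np.dot(net, direction) * direction            # signed extent, (3,)

def fit_verb_model(subject_locs, object_locs, trajectories):
    """Least-squares fit of a linear map from [subject, object, 1] (7-dim)
    to the verb's principal movement vector (3-dim), over N examples."""
    X = np.hstack([subject_locs, object_locs,
                   np.ones((len(trajectories), 1))])     # (N, 7)
    Y = np.stack([principal_movement_vector(t) for t in trajectories])  # (N, 3)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)            # (7, 3)
    return W

def predict_movement(W, subject_loc, object_loc):
    """Predict the verb's movement vector for novel argument locations."""
    x = np.concatenate([subject_loc, object_loc, [1.0]])
    return x @ W
```

In this sketch, `predict_movement` would supply the inflected movement direction and extent for argument placements unseen in training, which is what lets a single lexicon entry generalize across discourse referents; the paper's actual parameterization and fitting procedure may differ.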
