Implementation and evaluation of animation controls sufficient for conveying ASL facial expressions

Technology to automatically synthesize linguistically accurate and natural-looking animations of American Sign Language (ASL) from an easy-to-update script would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf. We are investigating the synthesis of ASL facial expressions, which are grammatically required and essential to the meaning of sentences. To support this research, we have enhanced a virtual human character with face controls following the MPEG-4 Facial Animation Parameter (FAP) standard. In a user study, we determined that these controls were sufficient for conveying understandable animations of ASL facial expressions.
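To illustrate the flavor of such controls, the following is a minimal sketch (not our implementation) of how MPEG-4 FAP-style values can drive a face rig: each parameter displaces one facial feature point along a fixed axis, scaled by a face-specific Facial Animation Parameter Unit (FAPU) so that the same parameter stream animates differently proportioned faces. The feature-point names, axis, and FAPU value below are illustrative assumptions, not values from the standard or from our character.

```python
from dataclasses import dataclass

@dataclass
class FapControl:
    feature_point: str          # e.g. "inner_brow_left" (illustrative name)
    axis: tuple                 # unit displacement direction (x, y, z)
    fapu: float                 # face-specific scale factor (assumed value)

def apply_faps(fap_values: dict, controls: dict, face: dict) -> dict:
    """Return new feature-point positions for one animation frame.

    fap_values maps FAP ids to dimensionless amplitudes; controls maps
    the same ids to FapControl definitions; face maps feature-point
    names to neutral (x, y, z) positions.
    """
    posed = dict(face)
    for fap_id, amplitude in fap_values.items():
        ctrl = controls[fap_id]
        x, y, z = posed[ctrl.feature_point]
        # Displace the feature point along its control axis, scaled by
        # the amplitude and the face-specific FAPU.
        dx, dy, dz = (amplitude * ctrl.fapu * a for a in ctrl.axis)
        posed[ctrl.feature_point] = (x + dx, y + dy, z + dz)
    return posed

# Example: raise the left inner eyebrow, as in an ASL yes/no-question
# brow raise (the FAP id and FAPU here are illustrative, not normative).
controls = {31: FapControl("inner_brow_left", (0.0, 1.0, 0.0), fapu=0.024)}
face = {"inner_brow_left": (0.03, 0.05, 0.09)}
print(apply_faps({31: 150.0}, controls, face))
```

A parameter scheme of this kind keeps the animation script independent of any one character model: the synthesis system emits a stream of FAP amplitudes per frame, and each virtual human maps them onto its own geometry through its FAPUs.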