Implementation and evaluation of animation controls sufficient for conveying ASL facial expressions
Technology to automatically synthesize linguistically accurate and natural-looking animations of American Sign Language (ASL) from an easy-to-update script would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf. We are investigating the synthesis of ASL facial expressions, which are grammatically required and essential to the meaning of sentences. To support this research, we have enhanced a virtual human character with face controls based on the MPEG-4 Facial Animation Parameter (FAP) standard. In a user study, we found that these controls were sufficient for conveying understandable animations of ASL facial expressions.
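To make the control scheme concrete, the following is a minimal sketch of how FAP-style face controls might be keyframed and sampled over time to drive such an animation. It assumes the MPEG-4 convention that each FAP is an indexed, FAPU-normalized displacement; the FAPTrack class, the choice of FAP index, and the keyframe values are illustrative placeholders, not the authors' implementation.

```python
# Illustrative sketch of keyframed MPEG-4 FAP-style face controls.
# Only the general FAP/FAPU framing follows the MPEG-4 Facial Animation
# standard; the class, the FAP index, and all values are hypothetical.

from dataclasses import dataclass, field
from bisect import bisect_right


@dataclass
class FAPTrack:
    """Keyframed values for one Facial Animation Parameter (FAP).

    Values are assumed to be FAPU-normalized, so one track can drive
    characters with different facial proportions.
    """
    fap_id: int                                  # FAP index (illustrative)
    keys: list = field(default_factory=list)     # (time_sec, value), sorted

    def add_key(self, t: float, value: float) -> None:
        self.keys.append((t, value))
        self.keys.sort()

    def sample(self, t: float) -> float:
        """Linearly interpolate the track at time t, clamping at the ends."""
        if not self.keys:
            return 0.0
        times = [k[0] for k in self.keys]
        i = bisect_right(times, t)
        if i == 0:
            return self.keys[0][1]
        if i == len(self.keys):
            return self.keys[-1][1]
        (t0, v0), (t1, v1) = self.keys[i - 1], self.keys[i]
        alpha = (t - t0) / (t1 - t0)
        return v0 + alpha * (v1 - v0)


def sample_fap_frame(tracks: list, t: float) -> dict:
    """Sample every track at time t, yielding a {fap_id: value} frame
    that a renderer could map onto the character's face rig."""
    return {trk.fap_id: trk.sample(t) for trk in tracks}


if __name__ == "__main__":
    # Hypothetical example: a brow raise ramping up over 0.4 s and then
    # holding, the kind of curve a grammatical ASL facial expression
    # (e.g., for a yes/no question) might require.
    brow = FAPTrack(fap_id=31)   # index chosen for illustration only
    brow.add_key(0.0, 0.0)
    brow.add_key(0.4, 1.0)
    brow.add_key(1.2, 1.0)
    for t in (0.0, 0.2, 0.4, 0.8):
        print(t, sample_fap_frame([brow], t))
```

A real system would feed the sampled frames to the character's rig at the animation frame rate; keeping tracks per-FAP lets grammatical facial expressions be layered and retimed independently of the manual signs.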