Text to Head Motion Synthesis for Chinese Sign Language Avatar

Head movement is an essential constituent of Chinese Sign Language (CSL), helping to complete the definition of signing gestures and to convey messages. Adding head motions to signing animations improves both their realism and their intelligibility. By analyzing head motions both as defined in CSL words and as captured from a large corpus of motion data from a real signer's performances, this paper proposes a quintuple for the formal description of head movements. A Text To Head Motion (TTHM) synthesis model is established to perform a low-level semantic mapping from words to head gestures. Experimental results verify that synthesizing head motions improves both the naturalness ratings and the intelligibility of signing animations.