Automatic facial animation generation system of dancing characters considering emotion in dance and music
In recent years, many 3D character dance animation movies have been created by amateur users with 3DCG animation editing tools (e.g., MikuMikuDance). However, most of them are produced manually, so an automatic facial animation system for dancing characters would help users create dance movies and visualize impressions effectively. We therefore address the challenging problem of estimating a dancing character's emotions (which we call "dance emotion"). In previous work considering music features, DiPaola et al. [2006] proposed a music-driven, emotionally expressive face system. To detect the mood of the input music, they used a hierarchical framework (the Thayer model) and generated facial animation that matches the music's emotion. However, their model cannot express subtleties between two emotions, because the input music is divided sharply into a few moods by a Gaussian mixture model. In addition, they determine more detailed moods with psychological rules that use score information, so their method requires MIDI data. In this paper, we propose a "dance emotion model" that visualizes a dancing character's emotion as facial expressions. The model assigns frame-by-frame coordinates on an emotional space, obtained through a perceptual experiment with a music and dance-motion database, and requires no MIDI data. Moreover, by considering displacement on the emotional space, our system expresses not only a single emotion but also subtleties between emotions. As a result, our system achieved higher accuracy than the previous work. Facial expression results can be created immediately from input audio data and synchronized motion. Figure 1 illustrates the system's utility through a comparison with the previous work.
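The idea of mapping frame-by-frame coordinates on an emotional space to blended facial expressions can be sketched as follows. This is a minimal illustration, not the paper's actual model: the anchor emotions, their valence–arousal coordinates, and the inverse-distance blending rule are all assumptions chosen to show how a point between anchors yields a mixture of expressions rather than a single hard label.

```python
import math

# Hypothetical anchor emotions on a 2D valence-arousal (Thayer-style) plane.
# Coordinates and emotion names are illustrative assumptions.
ANCHORS = {
    "happy":   ( 1.0,  1.0),   # positive valence, high arousal
    "angry":   (-1.0,  1.0),   # negative valence, high arousal
    "sad":     (-1.0, -1.0),   # negative valence, low arousal
    "relaxed": ( 1.0, -1.0),   # positive valence, low arousal
}

def blend_weights(valence, arousal, eps=1e-6):
    """Inverse-distance blend weights over anchor emotions for one frame.

    A coordinate lying between anchors yields a mixture of expressions,
    so the face can show subtleties between two emotions instead of
    snapping to a single mood.
    """
    dists = {
        name: math.hypot(valence - v, arousal - a)
        for name, (v, a) in ANCHORS.items()
    }
    # A frame sitting exactly on an anchor gets that emotion fully.
    for name, d in dists.items():
        if d < eps:
            return {n: float(n == name) for n in ANCHORS}
    inv = {name: 1.0 / d for name, d in dists.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}

# A frame-by-frame trajectory: the displacement across frames shifts the
# blend smoothly, e.g. from mostly "happy" toward mostly "relaxed".
trajectory = [(0.9, 0.9), (0.9, 0.0), (0.9, -0.9)]
for frame, (v, a) in enumerate(trajectory):
    w = blend_weights(v, a)
    print(frame, {k: round(x, 2) for k, x in w.items()})
```

In a real pipeline the per-frame weights would drive facial blend shapes; the point of the sketch is only that small displacements on the emotional space produce gradual expression changes rather than abrupt mood switches.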
[1] Steve DiPaola et al. Emotional remapping of music to facial animation. In Sandbox '06, 2006.