Construction of 3-D emotion space based on parameterized faces

If a machine could handle 'kansei' information such as emotion, the relationship between human and machine would become friendlier. Our goal is to realize a natural human-machine communication environment by giving a face to the computer terminal or communication system. To realize this environment, the machine must recognize the human's emotional state as it appears in the face and synthesize a reasonable facial image in response. For this purpose, the machine should have an emotion model, based on parameterized faces, that can map a person's emotional state into this space one-to-one and can also perform the inverse mapping.
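The bidirectional mapping described above can be sketched as follows. This is a minimal illustration only: the abstract does not specify the model, so the linear projection, the matrix `W`, and the number of facial parameters are all assumptions made here for concreteness.

```python
import numpy as np

# Illustrative sketch (not the paper's actual model): a linear map W
# projects a facial-parameter vector into a 3-D emotion space, and its
# pseudo-inverse maps an emotion point back to facial parameters.
rng = np.random.default_rng(0)

N_FACE_PARAMS = 10  # assumed number of facial parameters
W = rng.standard_normal((3, N_FACE_PARAMS))  # face params -> 3-D emotion space

def analyze(face_params: np.ndarray) -> np.ndarray:
    """Map a facial-parameter vector to a point in the 3-D emotion space."""
    return W @ face_params

def synthesize(emotion_point: np.ndarray) -> np.ndarray:
    """Map a 3-D emotion point back to facial parameters (least-norm inverse)."""
    return np.linalg.pinv(W) @ emotion_point

face = rng.standard_normal(N_FACE_PARAMS)
e = analyze(face)            # analysis: face -> emotion space
face_back = synthesize(e)    # synthesis: emotion space -> face
```

Because `W` maps many facial parameters onto three coordinates, synthesis here returns the least-norm facial configuration; the round trip emotion → face → emotion is consistent even though face → emotion → face is not one-to-one in this toy setting.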
