Effect of emotional expression to gaze guidance using a face robot

This paper describes gaze guidance using the emotional motion of a head robot called Kamin-FA1. We propose using not only the robot's gaze control but also its facial expression, combined with head motion, to guide a human being's gaze to a target. Information about the gaze target is conveyed to the human intuitively through shared attention with the emotional communication robot Kamin-FA1, which produces facial expressions using a curved surface display. We examined the effect of emotional expression on gaze guidance in terms of reliability and reaction speed, conducting gaze-measurement experiments during gaze guidance with and without emotional expression to evaluate its role. The results showed that gaze guidance with emotional expression produced more reliable and quicker eye movements than guidance without it. In particular, the expression of surprise performed best among the six basic emotions.
