Emotive qualities in lip-synchronized robot speech

This paper explores the expression of emotion in synthesized speech for an anthropomorphic robot. We have adapted several key emotional correlates of human speech to the robot's speech synthesizer, allowing the robot to speak in an angry, calm, disgusted, fearful, happy, sad, or surprised manner. We have evaluated our approach through acoustic analysis of the speech patterns for each vocal affect and have studied how well human subjects perceive the intended affect. The robot lip-synchronizes in real time to enhance the delivery of its expressive utterances.