Acoustical implicit communication in human-robot interaction

Explicit communication uses a distinct language or protocol to convey an idea directly. Implicit communication compensates for the many hidden meanings omitted from explicit language, and in some situations it can even take the place of explicit communication. For an autonomous robot, implicit communication offers an alternative way to interact with people. This paper introduces acoustic techniques for implicit communication in human-robot interaction and describes the design of robot games based on acoustical implicit communication.
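As a minimal illustration of the idea (a sketch, not the paper's actual method), an implicit acoustic cue such as voice loudness can be extracted from raw audio without any speech recognition, letting a robot react to *how* something is said rather than *what* is said. The frame length, threshold, and function names below are illustrative assumptions:

```python
import math

def frame_features(samples, frame_len=256):
    """Split a mono signal into fixed-length frames and compute two
    simple prosodic cues per frame: short-time energy and
    zero-crossing rate (a rough correlate of loudness and pitch)."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        ) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

def loudness_cue(feats, threshold=0.1):
    """Toy implicit cue: flag frames whose energy exceeds a threshold,
    e.g. so a robot can notice a raised voice without parsing words.
    The threshold value is an arbitrary assumption for this demo."""
    return [energy > threshold for energy, _ in feats]

# Synthetic demo: a quiet 220 Hz tone followed by a loud one (8 kHz rate).
quiet = [0.05 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(2048)]
loud = [0.8 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(2048)]
cues = loudness_cue(frame_features(quiet + loud))
```

In a real system the same frame-level features would feed a classifier (e.g. for speaker identity or emotion), but even this thresholded cue demonstrates how meaning can be carried acoustically rather than linguistically.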
