An affective interactive audio interface for Lovotics

The aim of the "Lovotics" (Love + Robotics) research is to build an amicable relationship between humans and robots: a sentimental robotic system engaged in reciprocal affective interaction with humans. This article outlines the part of the project concerned with developing an affective audio system for Lovotics that acts as an active participant in a bidirectional nonverbal communication process with humans. The interactive audio system synthesizes real-time audio output from eight parameters: pitch, number of harmonics, amplitude, tempo, sound envelope, chronemics, proximity, and synchrony. Beyond the first five, well-established parameters, we conducted comprehensive research and user testing on chronemics, proximity, and synchrony (the C.P.S. effect) to determine how these three factors enhance positive feelings in human-robot interaction; our study found that all three have a positive effect on the emotional interaction between humans and robots. On this basis, we implemented an interactive audio feedback system that enables sentimental interaction between humans and robots. Such a system aims to open new possibilities for exploring the concept of human-robot love.
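The paper describes the synthesis model only at the parameter level. As a rough, hypothetical illustration (not the authors' implementation), the sketch below shows how the first five parameters (pitch, number of harmonics, amplitude, tempo, and sound envelope) could drive a simple additive synthesizer; all function names and value ranges here are assumptions. The C.P.S. parameters govern the timing and mirroring of responses rather than the waveform itself, so they are omitted.

```python
# Minimal sketch of an additive synthesizer driven by five of the
# eight Lovotics audio parameters. This is NOT the paper's
# implementation; names and ranges are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 44100  # Hz

def synthesize_tone(pitch_hz: float,
                    n_harmonics: int,
                    amplitude: float,
                    tempo_bpm: float,
                    attack_s: float,
                    release_s: float) -> np.ndarray:
    """Render one beat of sound as a mono float array in [-1, 1]."""
    duration = 60.0 / tempo_bpm  # tempo sets the length of one beat
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration),
                    endpoint=False)

    # Additive synthesis: sum of harmonics with a 1/k amplitude rolloff.
    wave = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        wave += np.sin(2.0 * np.pi * pitch_hz * k * t) / k
    wave /= np.max(np.abs(wave))  # normalize before applying amplitude

    # Linear attack/release ramp as a stand-in for the paper's
    # "sound envelope" parameter.
    env = np.ones_like(t)
    n_att = int(attack_s * SAMPLE_RATE)
    n_rel = int(release_s * SAMPLE_RATE)
    if n_att > 0:
        env[:n_att] = np.linspace(0.0, 1.0, n_att)
    if n_rel > 0:
        env[-n_rel:] = np.linspace(1.0, 0.0, n_rel)

    return amplitude * env * wave

# Example: a soft, warm tone one might map to a positive affective state.
tone = synthesize_tone(pitch_hz=440.0, n_harmonics=6, amplitude=0.4,
                       tempo_bpm=90.0, attack_s=0.05, release_s=0.2)
```

In a full interactive system, one would expect a response scheduler to set these parameters per interaction event, with the C.P.S. factors controlling when and how synchronously each tone is played relative to the human's behavior.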
