Impressions made by blinking light used to create artificial subtle expressions and by robot appearance in human-robot speech interaction

The impressions that a blinking light used to create artificial subtle expressions (ASEs) and a robot's appearance make on users were investigated. The blinking light, which shows the user that the robot is performing speech recognition and thereby prevents utterance collisions, was separated from the robot by embedding it in a pedestal unit. In an evaluation experiment, participants performed five tasks with a spoken dialogue system coupled to a robot placed on the pedestal. The participants' impressions of the dialogue interactions and of the robot were collected under four conditions (light blinking or not blinking; humanoid or cuboid robot). The cuboid robot created stronger impressions of comfort and excitement in the interactions, whereas the blinking light had no strong effect on impressions of the interactions; neither the robot's appearance nor the blinking strongly affected impressions of the robot itself. This suggests that the blinking light in the pedestal unit is a factor independent of robot appearance, so the pedestal unit can be applied to robots with various appearances.
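The mechanism described above (a light that blinks while the robot is busy recognizing speech, so the user knows to wait before speaking again) can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation; the class name, blink period, and logging are all hypothetical.

```python
import threading
import time

class BlinkingLightASE:
    """Hypothetical sketch of a pedestal-unit blinking light: it blinks
    while speech recognition is in progress, signalling the user to hold
    the next utterance and thereby avoiding utterance collisions."""

    def __init__(self, period_s=0.5):
        self.period_s = period_s          # half-cycle of the blink
        self._blinking = threading.Event()
        self._log = []                    # recorded on/off states, for illustration
        self._thread = None

    def _blink_loop(self):
        on = False
        while self._blinking.is_set():
            on = not on                   # toggle the light each half-cycle
            self._log.append("on" if on else "off")
            time.sleep(self.period_s)
        self._log.append("off")           # light is off once recognition ends

    def start_recognition(self):
        # Called when the recognizer starts processing the user's utterance.
        self._blinking.set()
        self._thread = threading.Thread(target=self._blink_loop)
        self._thread.start()

    def end_recognition(self):
        # Called when recognition finishes; the light stops blinking.
        self._blinking.clear()
        self._thread.join()

light = BlinkingLightASE(period_s=0.05)
light.start_recognition()
time.sleep(0.2)          # stand-in for the recognizer doing its work
light.end_recognition()
print(light._log[-1])    # the light always ends in the "off" state
```

Separating this indicator from the robot body, as the pedestal unit does, means the same signalling logic can be reused regardless of whether the robot above it is humanoid or cuboid.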
