Proposing Artificial Subtle Expressions as an Intuitive Notification Methodology for Artificial Agents' Internal States

Takanori Komatsu (tkomat@shinshu-u.ac.jp)
International Young Researcher Empowerment Center, Shinshu University, 3-15-1 Tokida, Ueda 386-8567, Japan

Seiji Yamada (seiji@nii.ac.jp)
National Institute of Informatics / SOKENDAI, 2-1-2 Hitotsubashi, Tokyo 101-8430, Japan

Kazuki Kobayashi (kby@cs.shinshu-u.ac.jp)
Graduate School of Science and Technology, Shinshu University, 4-17-1 Wakasato, Nagano 380-8553, Japan

Kotaro Funakoshi (funakoshi@jp.honda-ri.com) and Mikio Nakano (nakano@jp.honda-ri.com)
Honda Research Institute Japan Co., Ltd., 8-1 Honcho, Wako 351-0188, Japan

Abstract

We describe artificial subtle expressions (ASEs) as an intuitive methodology for notifying users of an artifact's internal state. We prepared two types of audio ASEs: one was a flat artificial sound (flat ASE), and the other was a sound that decreased in pitch (decreasing ASE). Each ASE was played immediately after a robot made a suggestion to the user. Specifically, we expected the decreasing ASE to inform users of the robot's lower level of confidence in its suggestion. We then conducted a simple experiment to observe whether participants accepted or rejected the robot's suggestions depending on the ASE. The results showed that participants accepted the robot's suggestion when the flat ASE was played, whereas they rejected it when the decreasing ASE was played. We therefore conclude that the ASEs succeeded in conveying the robot's internal state to users accurately and intuitively.

Keywords: Artificial subtle expressions (ASEs); Complementary; Intuitive; Simple; Accurate.

Introduction

Although human communication is achieved explicitly through verbal utterances, paralinguistic information (e.g., the pitch and power of utterances) and nonverbal information (e.g., facial expressions, gaze direction, and gestures) also play important roles (Kendon, 1994). This is because one's internal state is deeply reflected in one's paralinguistic and nonverbal information; other people can intuitively and easily understand a person's internal state from such information when it is expressed (Cohen et al., 1990). Recently, some researchers have reported that very small changes in the expression of such information, called subtle expressions (Liu & Picard, 2003), significantly influence human communication, especially the conveyance of one's internal state to others. For example, Ward (2003) reported that subtle inflections in the pitch of speech reflect a speaker's emotional state even when that state is contradicted by the literal meaning of the speech, and Cowell & Ayesh (2004) made a similar argument for facial expressions. It is therefore believed that such subtle expressions can help humans easily understand an artifact's internal state, because humans can understand these expressions intuitively. For example, Sugiyama et al. (2006) developed a humanoid robot that expresses appropriate gestures based on its recognition of the situation, and Kipp & Gebhard (2008) developed a human-like avatar agent that controls its gaze direction according to the user's gaze direction. However, because these researchers implemented subtle expressions on sophisticated artifacts (e.g., humanoid robots or dexterous avatar agents), the implementation costs were considerably high.

In contrast to the above approaches, Yamada & Komatsu (2006) and Komatsu & Yamada (2007) reported that simple beeping sounds with decreasing or increasing frequency, emitted by a robot, enabled humans to interpret the robot's negative or positive states, respectively. Funakoshi et al. (2008) likewise reported that a blinking LED on a robot could convey the robot's internal state (processing or busy) to users, reducing the occurrence of speech collisions during verbal conversation. It thus appeared that such simple expressions (beeping sounds or blinking LEDs) from artifacts could play a role similar to that of human subtle expressions, so we named these expressions "Artificial Subtle Expressions (ASEs)": artifacts' simple and low-cost expressions that enable humans to estimate the artifacts' internal states accurately and intuitively. We stipulate that the ASEs should
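To make the two audio ASEs described above concrete, the following is a minimal sketch in Python of how a flat tone and a decreasing-pitch tone could be synthesized as WAV files. It is an illustration under assumed parameters, not the authors' actual stimulus generation: the duration (0.5 s), starting pitch (400 Hz), final pitch of the decreasing ASE (250 Hz), and sampling rate are all assumptions, since the stimulus values are not given in this section.

```python
# Sketch: synthesize a "flat" ASE (constant pitch) and a "decreasing" ASE
# (linearly falling pitch) as 16-bit mono WAV files. All parameter values
# are illustrative assumptions, not the paper's actual stimulus settings.
import wave
import numpy as np

SAMPLE_RATE = 44100     # samples per second (assumed)
DURATION = 0.5          # ASE length in seconds (assumed)
F_START = 400.0         # starting pitch in Hz (assumed)
F_END_DECREASING = 250.0  # final pitch of the decreasing ASE in Hz (assumed)

def synthesize(f_start: float, f_end: float, filename: str) -> None:
    """Write a sine tone whose pitch moves linearly from f_start to f_end Hz."""
    t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
    # Phase is the integral of the linearly interpolated instantaneous frequency:
    # f(t) = f_start + (f_end - f_start) * t / DURATION
    phase = 2.0 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2.0 * DURATION))
    samples = (0.5 * np.sin(phase) * 32767).astype(np.int16)  # half amplitude
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(1)           # mono
        wav.setsampwidth(2)           # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(samples.tobytes())

synthesize(F_START, F_START, "flat_ase.wav")                 # constant pitch
synthesize(F_START, F_END_DECREASING, "decreasing_ase.wav")  # falling pitch
```

Playing either file directly after a synthesized suggestion would reproduce the experimental setup in outline: the flat tone leaves the suggestion unqualified, while the falling tone signals lower confidence.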
References

[1] Michael Kipp, et al. IGaze: Studying Reactive Gaze Behavior in Semi-immersive Human-Avatar Interactions. IVA, 2008.
[2] A. Kendon. Do Gestures Communicate? A Review. 1994.
[3] Seiji Yamada, et al. How do robotic agents' appearances affect people's interpretations of the agents' attitudes? CHI Extended Abstracts, 2007.
[4] Seiji Yamada, et al. Smoothing human-robot speech interactions by using a blinking-light as subtle expression. ICMI '08, 2008.
[5] William W. Gaver. Auditory Icons: Using Sound in Computer Interfaces. Human-Computer Interaction, 1986.
[6] William W. Gaver. The SonicFinder: An Interface That Uses Auditory Icons. Human-Computer Interaction, 1989.
[7] Meera Blattner, et al. Earcons and Icons: Their Structure and Common Design Principles. Human-Computer Interaction, 1989.
[8] Rosalind W. Picard, et al. Subtle Expressivity in a Robotic Computer. 2003.
[9] Seiji Yamada, et al. Designing simple and effective expression of robot's primitive minds to a human. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006.
[10] Philip R. Cohen, et al. Intentions in Communication. 1990.
[11] Takayuki Kanda, et al. Humanlike conversation with gestures and verbal cues based on a three-layer attention-drawing model. Connection Science, 2006.
[12] Nigel Ward, et al. On the Expressive Competencies Needed for Responsive Systems. 2003.