Social Reflex Hypothesis on Blinking Interaction

Yuichiro Yoshikawa (yoshikawa@jeap.org)
Asada Synergistic Intelligence Project, ERATO, JST
2-1 Yamadaoka, Suita, Osaka, 565-0871 Japan

Kazuhiko Shinozawa (shino@atr.jp)
2-2-2 Hikaridai, Keihanna Science City, Kyoto, Japan

Hiroshi Ishiguro (ishiguro@ams.eng.osaka-u.ac.jp)
Asada Synergistic Intelligence Project, ERATO, JST
Dept. of Adaptive Machine Systems, Graduate School of Engineering, Osaka University
2-1 Yamadaoka, Suita, Osaka, 565-0871 Japan

Abstract

An interactive artificial agent is supposed to be a feasible tool to reveal how humans recognize and respond to another's response, and should be developed through investigation of the human cognitive mechanism. Previous studies have suggested that humans are sensitive to the responses of an interactant and might relate them to the interactant's communicativeness. Following on from the findings of these studies, this paper presents the social reflex hypothesis on nonverbal responsive interaction, which refers to the social effect and origins of one's unconscious responses to the nonverbal behaviors of an interactant. To investigate the hypothesis, we conducted an experiment evaluating participants' impressions of an on-screen agent that could blink in response to the participant's blinking. We found a non-linear relationship between the response latencies and the participants' feeling of being looked at, which accords with the hypothesis. The implications of the result and further work on other aspects of the hypothesis are discussed.

Keywords: Social reflex hypothesis; response latency; feeling of being looked at; interactive agent.

Introduction

Interactive artificial agents, including communication robots and on-screen agents, have attracted wide attention as potential devices for intuitive human interfaces (Kanda, Ishiguro, Imai, & Ono, 2004) and as therapeutic tools for communication disorders (Robins, Dickerson, Stribling, & Dautenhahn, 2004). To design an agent that responds to humans, it is necessary to know how humans recognize and respond to an agent's response. However, it is difficult to control the experimental conditions needed to reveal the human cognitive mechanisms directed toward an interactant, since we cannot completely control confederate humans. For example, it is difficult to inhibit or promote a confederate's unconscious responses, which might form important aspects of how we recognize the other. Therefore, interactive artificial agents have been adopted as controllable "humans" in experiments, as a complementary way to approach the cognitive mechanisms directed toward a communication partner.

Experiments with artificial agents or objects without animate appearances have been extensively conducted to reveal how infants come to regard someone or something as having the attributes important for a communicative existence. From an experiment comparing infants' re-enactment of demonstrations performed by a human and by a mechanical manipulator, an animate appearance has been suggested to be a necessary aspect for infants to attribute intentions to the other (Meltzoff, 1995). On the other hand, experiments using non-animate agents, which lack an animate appearance but possess certain behavioral aspects, have been conducted to reveal the effect of those aspects. Through these experiments, behavioral aspects such as rationality (Gergely & Csibra, 2003), self-propelledness (Luo & Baillargeon, 2005), and interactiveness or contingency (Shimizu & Johnson, 2004) have also been suggested to be necessary for infants to attribute goals or intentions to the agents. Experiments using an agent with a more animate appearance, such as a humanoid robot, have revealed that infants expect communicability of an interactive robot (Arita, Hiraki, Kanda, & Ishiguro, 2005). Furthermore, studies using an agent with both an animate appearance and controllable behavioral aspects have been expected to reveal their synergistic effects (Johnson, Slaughter, & Carey, 1998; Kamewari, Kato, Kanda, Ishiguro, & Hiraki, 2005).

However, in previous experiments, the participants usually observed the agents only from a third-person viewpoint. In other words, there have still been few studies directly analyzing how we recognize others through actively interacting with them. This might be due to the difficulty of building a sufficiently interactive agent. However, as shown in a study using an android (Ishiguro & Minato, 2005), recent technological developments allow us to provide an interactive agent with an anthropomorphic appearance and powerful capabilities for sensing subtle human behaviors, which should make it sufficiently interactive to induce natural responses from humans. In other words, we now have artificial, controllable "humans" with which we can design experiments on the nature of the human cognitive mechanism involved in recognizing and responding to an interactant's response.

Some previous studies with interactive robots have demonstrated that humans are sensitive to an interactant's response. Watanabe et al. suggested that a robot's responsive nodding to a participant's voice can lead her to engage in conversation with it (Watanabe, Danbara, & Okubo, 2003). Responses not only to voice but also to nonverbal signals, such as mimicry of the partner's movement, have been shown to make participants regard the mimicking agent highly (Bailenson & Yee, 2005).
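To make the experimental manipulation described in the abstract concrete, the following is a minimal sketch, in Python, of a latency-controlled blink response: whenever the participant's blink is detected, the on-screen agent blinks back after a configurable delay. The blink detector, the animation call, and the 0.5 s latency value are all assumptions for illustration, not the apparatus or parameters used in the actual experiment.

```python
import time
import random

# Assumed manipulation variable: delay between the participant's blink
# and the agent's responding blink (seconds).
RESPONSE_LATENCY_S = 0.5


def participant_blinked() -> bool:
    """Stub for a blink detector (e.g., an eye camera); here it fires at random."""
    return random.random() < 0.01


def agent_blink() -> None:
    """Stub for driving the on-screen agent's eyelid animation."""
    print(f"[{time.strftime('%H:%M:%S')}] agent blinks")


def run_session(duration_s: float = 10.0) -> None:
    """Poll the blink detector and respond after RESPONSE_LATENCY_S."""
    end = time.time() + duration_s
    pending = []  # timestamps at which the agent should blink back
    while time.time() < end:
        now = time.time()
        if participant_blinked():
            pending.append(now + RESPONSE_LATENCY_S)
        # Trigger any responses whose latency has elapsed.
        while pending and pending[0] <= now:
            pending.pop(0)
            agent_blink()
        time.sleep(0.01)  # ~100 Hz polling loop


if __name__ == "__main__":
    run_session()
```

Varying RESPONSE_LATENCY_S across conditions is the kind of manipulation whose non-linear relationship to the feeling of being looked at the paper reports.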
References

[1] A. Meltzoff. Understanding the intentions of others: Re-enactment of intended acts by 18-month-old children. Developmental Psychology, 1995.
[2] S. Carey, et al. Whose gaze will infants follow? The elicitation of gaze-following in 12-month-olds. 1998.
[3] G. Csibra, et al. Teleological reasoning in infancy: The naïve theory of rational action. Trends in Cognitive Sciences, 2003.
[4] T. Watanabe, et al. Effects of a speech-driven embodied interactive actor "InterActor" on talker's speech characteristics. Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2003), 2003.
[5] S. C. Johnson, et al. Infants' attribution of a goal to a morphologically unfamiliar agent. Developmental Science, 2004.
[6] T. Ono, et al. Development and evaluation of interactive humanoid robots. Proceedings of the IEEE, 2004.
[7] B. Robins, et al. Robot-mediated joint attention in children with autism: A case study in robot-human interaction. 2004.
[8] J. Bailenson, et al. Digital chameleons. Psychological Science, 2005.
[9] T. Kanda, et al. Six-and-a-half-month-old children positively attribute goals to human action and to humanoid-robot motion. 2005.
[10] R. Baillargeon, et al. Can a self-propelled box have a goal? Psychological Science, 2005.
[11] T. Kanda, et al. Can we talk to robots? Ten-month-old infants expected interactive humanoid robots to be talked to by persons. Cognition, 2005.
[12] H. Ishiguro. Development of androids for studying on human-robot interaction. 2005.
[13] Y. Yoshikawa, et al. Responsive robot gaze to interaction partner. Robotics: Science and Systems, 2006.
[14] T. Kanda, et al. How contingent should a communication robot be? HRI '06, 2006.
[15] D. Stahl, et al. Sensitivity to interpersonal timing at 3 and 6 months of age. 2006.
[16] P. Rochat, et al. Origins of self-concept. 2007.