Evaluation of Unimodal and Multimodal Communication Cues for Attracting Attention in Human–Robot Interaction

One of the most common tasks of a robot companion in the home is communication. To initiate an information exchange with its human partner, the robot must first attract the human's attention. This paper presents the results of two user studies (N = 12) evaluating the effectiveness of unimodal and multimodal communication cues for attracting attention. Results showed that unimodal cues involving sound produced the fastest reaction times. Contrary to expectations, multimodal cues resulted in longer reaction times than the fastest unimodal cue.
