Robot Form and Motion Influences Social Attention

For social robots to be successful, they need to be accepted by humans. Human-robot interaction (HRI) researchers are aware of the need to develop the right kinds of robots with appropriate, natural ways for them to interact with humans. However, much of human perception and cognition occurs outside of conscious awareness, and how robotic agents engage these processes is currently unknown. Here, we explored automatic, reflexive social attention, which operates outside of conscious control within a fraction of a second, to discover whether and how these processes generalize to agents that vary in the humanlikeness of their form and motion. Using a social variant of a well-established spatial attention paradigm, we tested whether robotic or human appearance and/or motion influenced an agent's ability to capture or direct implicit social attention. In each trial, human observers viewed either images or videos of agents looking to one side of space (a head turn). We measured reaction time to a peripheral target as an index of attentional capture and direction. We found that all agents, regardless of humanlike form or motion, were able to direct spatial attention in the cued direction. However, differences in the form of the agent affected attentional capture, i.e., how quickly observers could disengage attention from the agent and respond to the target. This effect further interacted with whether the spatial cue (head turn) was presented as a static image or a video. Overall, whereas reflexive social attention operated in the same manner for human and robot agents in spatial attentional cueing, robotic appearance, as well as whether the agent was static or moving, significantly influenced unconscious attentional capture. These studies reveal how unconscious social attentional processes operate when the agent is a human vs. a robot, add novel manipulations to the literature, such as the role of visual motion, and link attention studies in HRI with decades of research on unconscious social attention in experimental psychology and vision science.

Categories and Subject Descriptors: H.1.2 [Models and Principles]: User/Machine Systems – Human factors; H.5.2 [Information Interfaces and Presentation]: User Interfaces – Evaluation/methodology, User-Centered Design.

General Terms: Design, Human Factors.
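To make the cueing paradigm concrete, the sketch below lays out the trial structure and the two reaction-time measures in Python. It is a minimal illustration under assumed condition names (human, android, robot), trial counts, and simulated reaction times; none of these values come from the paper, and a real experiment would present the cue and target with stimulus-presentation software rather than simulating responses.

```python
# Minimal sketch of a nonpredictive spatial-cueing ("gaze cueing") experiment
# like the paradigm described above. Condition names, trial counts, and the
# simulated reaction times are illustrative assumptions, not parameters
# reported in the paper.
import random
from statistics import mean

AGENTS = ["human", "android", "robot"]      # assumed levels of humanlike form
CUE_FORMATS = ["static_image", "video"]     # head turn shown as image or clip

def make_trials(n_per_cell=20):
    """Build a fully crossed, shuffled trial list. The head-turn cue is
    nonpredictive: the target appears on the cued side on half of the
    trials (congruent) and on the opposite side otherwise (incongruent)."""
    trials = []
    for agent in AGENTS:
        for fmt in CUE_FORMATS:
            for congruent in (True, False):
                for _ in range(n_per_cell):
                    cue = random.choice(["left", "right"])
                    opposite = "left" if cue == "right" else "right"
                    trials.append({
                        "agent": agent,
                        "format": fmt,
                        "cue_dir": cue,
                        "target_side": cue if congruent else opposite,
                        "congruent": congruent,
                    })
    random.shuffle(trials)
    return trials

def simulate_rt(trial):
    """Stand-in for a real keypress: congruent targets are detected slightly
    faster (the cueing effect), and overall speed varies by agent (a toy
    version of the capture/disengagement difference)."""
    base = {"human": 0.340, "android": 0.350, "robot": 0.345}[trial["agent"]]
    cueing = -0.015 if trial["congruent"] else 0.015
    return base + cueing + random.gauss(0, 0.020)

def cueing_effect(records):
    """Cueing effect = mean RT(incongruent) - mean RT(congruent); a positive
    value indicates attention shifted toward the cued side."""
    cong = [r["rt"] for r in records if r["congruent"]]
    incong = [r["rt"] for r in records if not r["congruent"]]
    return mean(incong) - mean(cong)

if __name__ == "__main__":
    data = []
    for trial in make_trials():
        trial["rt"] = simulate_rt(trial)
        data.append(trial)
    for agent in AGENTS:
        per_agent = [r for r in data if r["agent"] == agent]
        # Overall mean RT indexes attentional capture/disengagement; the
        # congruency difference indexes orienting in the cued direction.
        print(f"{agent}: mean RT = {mean(r['rt'] for r in per_agent):.3f} s, "
              f"cueing effect = {cueing_effect(per_agent) * 1000:.1f} ms")
```

The key design point in the classic version of this paradigm (e.g., Friesen & Kingstone, 1998) is that the cue does not predict the target's location, so any congruency benefit reflects reflexive rather than strategic orienting; the analysis above separates that benefit (cueing effect) from overall response speed (capture).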
