I saw it on YouTube! How online videos shape perceptions of mind, morality, and fears about robots

Robots have the potential to transform our existing categorical distinction between “property” and “persons.” Previous research has demonstrated that humans readily anthropomorphize robots, and this tendency may be amplified when a robot is subjected to abuse. Simultaneously, robots give rise to hopes and fears about the future and our place in it. However, most available evidence on these mechanisms is either anecdotal or based on a small number of laboratory studies with limited ecological validity. The present work aims to bridge this gap by examining the responses of participants (N = 160) to four popular online videos from a leading robotics company (Boston Dynamics) and to one more familiar vacuum-cleaning robot (Roomba). Our results suggest that unexpectedly human-like abilities may provide more potent cues to mind perception than appearance, whereas appearance may attract more compassion and protection. Exposure to advanced robots significantly influences attitudes toward future artificial intelligence. We discuss the need for more research examining groundbreaking robotics outside the laboratory.
