A perceived moral agency scale: Development and validation of a metric for humans and social machines

Abstract

Although current social machine technology cannot fully exhibit the hallmarks of human morality or agency, popular culture representations and emerging technology make it increasingly important to examine human interlocutors' perceptions of social machines (e.g., digital assistants, chatbots, robots) as moral agents. To facilitate such scholarship, the notion of perceived moral agency (PMA) is proposed and defined, and a metric is developed and validated through two studies: (1) a large-scale online survey featuring potential scale items and concurrent validation metrics for both machine and human targets, and (2) a scale validation study with robots presented as variably agentic and moral. The PMA metric is shown to be reliable and valid and to exhibit predictive utility.
