Humane Anthropomorphic Agents: The Quest for the Outcome Measure

Artificial intelligence has become an integral part of our daily lives. Today, we engage with intelligent agents at home, on the street, and at work. Rapid advances in technological capabilities make such intelligent agents increasingly human-like. Anthropomorphic agents are characterized by a high degree of socialness, intelligence, and efficiency. They afford many opportunities (e.g., convenience, availability, automation), yet they also carry potential negative impacts for human users, such as uninformed decision making, loss of control, or lack of transparency. Anthropomorphic agents thus mark a new quality of human-computer interaction, one whose design process and outcomes should account for values and ethics. However, the typical outcome measures for assessing the quality of an intelligent agent from a user-centric perspective are limited to accessibility, usability, or user experience. In this position paper, we argue that the design of anthropomorphic agents needs to go beyond established HCI measures in order to emphasize ethics and values in the digital age. We therefore propose a new outcome measure, “humaneness”, as a foundation for understanding and designing humane anthropomorphic agents.
