Do We Trust in AI? Role of Anthropomorphism and Intelligence

ABSTRACT AI applications are radically transforming how service providers and consumers interact. We explore how the humanness of AI applications affects consumers’ trust in these applications. Qualitative evidence collected through focus groups provides fresh insights into the roles of anthropomorphism and intelligence as key constructs representing humanness. Our findings reveal consumers’ perspectives on the nuances of these constructs in services enabled by AI applications. They also extend current understanding of the “uncanny valley” phenomenon by identifying conditions under which consumers experience discomfort and unease as AI humanness increases in service environments.
