Building Trust Over Time in Human-Agent Relationships

This paper aims to understand how long-term trust and distrust develop between humans and agents (smart objects). We first conducted a qualitative study to explore the key factors that lead to trust and distrust, how the human-agent trust journey develops, and what roles these trust-building factors play in that journey. This study comprised an open-ended questionnaire completed by 621 participants and a five-day diary study with 60 participants that examined 499 human-object and human-agent relationships. Next, we conducted a mixed-methods study in which 146 participants rated the importance of the key factors identified through analysis of the data collected in the first study. We contribute to the HAI community by identifying eight factors that account for participants' trust and distrust toward new objects, showing how these factors play different roles at different phases of the human-agent trust journey, and assessing how important each factor is for both trust and distrust. We identified ebbs and flows in the human-agent trust journey over time, revealing periods when trust is particularly vulnerable. We also found that the most important factors for building trust and for avoiding distrust did not entirely overlap. We discuss these findings and their implications for designing agents that need to foster trusting long-term relationships with humans.
