Designing for Appropriate Trust in Automated Vehicles

Automated vehicles (AVs) have become a popular area of research due to, among other reasons, claims of increased traffic safety and user comfort. However, before users can reap these benefits, they must first trust the AV. Trust in AVs has gained greater interest in recent years, both as a prerequisite for user acceptance and adoption and as an important factor for a good user experience. The goal, however, is not simply to create trust in AVs, but to create an appropriate level of trust in relation to the actual performance of the AV. Little research has presented a systematic and holistic approach that may assist developers in the design process to understand what to primarily focus on, and how, when developing AVs that help users form an appropriate level of trust.

This thesis presents two mixed-method studies (Study I and Study II). Study I considers which factors affect users' trust in the AV and is primarily based on a literature review complemented by a user study. Study II, a user study built upon Study I, uses a Wizard of Oz (WOz) approach to understand how the behaviour of an AV affects users' trust in a simulated but realistic context comprising seven day-to-day traffic situations.

The results show that trust is primarily affected by information from and about the AV. Furthermore, trust in AVs develops over four phases: (i) before the user's first physical interaction with the AV, (ii) during usage, whilst the user is learning how the AV performs, (iii) after the user has learned how the AV performs in a specific context, and (iv) after the user has learned how the AV performs in a specific context but that context changes. Driving behaviour was found to affect the user's trust during usage and whilst learning how the AV performs, primarily through how well the driving behaviour communicated intentions, enabling the user to predict upcoming AV actions.
Users were also affected by the perceived benevolence of the AV, that is, how respectful they interpreted its driving behaviour to be. The results further showed that the user's trust in the AV is affected by aspects of the specific traffic situation, such as perceived task difficulty, perceived risk for oneself (and others), and how well the AV conformed to the user's expectations. Thus, what matters is not only how the AV performs, but how it performs in relation to different traffic situations. Finally, since design research considers not only how things are but also how things ought to be, a tentative explanatory and prescriptive model was developed based on these results. The model of trust information exchange and gestalt explains how information affecting user trust travels from a trust information sender to a trust information receiver, and highlights the important aspects for developers to consider when designing for appropriate trust in AVs, such as the design space and its related variables. The design variables are a) the message (the type and amount of information), b) the artefact (the AV, including its communication channels and properties), and c) the information gestalt, which arises from the combination of signals communicated through the properties (and communication channels). The gestalt is what the user ultimately perceives: the combined result of all signals. Developers therefore need to consider not only how individual signals are perceived and interpreted, but also how different signals are perceived and interpreted together, as a whole, an information gestalt.
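As a loose illustration only (not part of the thesis, and with all names hypothetical), the three design variables could be sketched as simple data structures, where the gestalt aggregates individual signals into the whole that the user ultimately perceives:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    """(a) The message: the type and amount of information to convey."""
    info_type: str   # e.g. "intent", "system status"
    amount: str      # e.g. "minimal", "detailed"


@dataclass
class Artefact:
    """(b) The artefact: the AV, with its communication channels and properties."""
    channels: List[str]    # e.g. displays, sounds, driving behaviour
    properties: List[str]  # e.g. speed, lateral position, indicator lights


@dataclass
class InformationGestalt:
    """(c) The gestalt: the combined impression of all communicated signals."""
    signals: List[str]

    def combined_impression(self) -> str:
        # The user perceives the signals together, as a whole,
        # rather than each signal in isolation.
        return " + ".join(self.signals)


# Example: two signals combine into a single perceived gestalt.
gestalt = InformationGestalt(signals=["early braking", "turn indicator"])
print(gestalt.combined_impression())
```

The point of the sketch is only that the gestalt is a function of the combined signals, mirroring the thesis's claim that developers must design for how signals are interpreted together, not one at a time.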