Assurances and Machine Self-Confidence for Enhanced Trust in Autonomous Systems

This work investigates a model-based approach to understanding how user trust evolves in systems that pair a supervising user with an autonomous agent. The approach combines a multivariate model of user trust with a feedback connection between user and agent. Information fed back to the user is termed an assurance, which is itself shown to comprise multiple aspects of the autonomous agent's state. We argue that the closed-loop interactions between user and agent can, and should, be designed to calibrate user trust. To develop design principles, it is first necessary to define the terms and salient components of these models and to provide a logical framework for their interconnection. Although elements such as trust and assurance are essential to a usable autonomous system, they are also nebulous concepts with multiple meanings [1]. We provide definitions and structure that enable a systematic study of the problem. Our user trust model implies that better assurances can be designed by giving users better insight into the 'competency boundaries' of the autonomy's key decision-making components. One potentially important assurance is a report of the self-confidence (i.e., self-trust) the autonomy has in its own processes. We are currently developing formal computational mechanisms for assessing machine self-confidence as an assurance in the context of probabilistic autonomous route planning.
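
As a concrete, simplified illustration of what such a mechanism might look like, the sketch below estimates one plausible self-confidence measure for a probabilistic route planner: the probability, under Monte Carlo rollouts of the planner's own outcome model, that a chosen route stays within a performance budget. This is an assumed toy setup for illustration, not the mechanism developed in this work; the Gaussian edge-cost model, the `simulate_route` dynamics, and the `cost_budget` threshold are all hypothetical.

```python
import random
import statistics

def simulate_route(route, edge_cost_params, rng):
    """Sample one rollout cost for a route whose edge traversal costs
    are modeled as independent Gaussians (a stand-in for whatever
    probabilistic outcome model the planner actually carries)."""
    total = 0.0
    for edge in route:
        mean, std = edge_cost_params[edge]
        total += max(0.0, rng.gauss(mean, std))
    return total

def outcome_self_confidence(route, edge_cost_params, cost_budget,
                            n_rollouts=5000, seed=0):
    """Estimate P(total cost <= cost_budget) by Monte Carlo.

    Reporting this probability to the user is one candidate
    'self-confidence' assurance: it summarizes how sure the autonomy
    is that its chosen plan will achieve an acceptable outcome."""
    rng = random.Random(seed)
    costs = [simulate_route(route, edge_cost_params, rng)
             for _ in range(n_rollouts)]
    successes = sum(c <= cost_budget for c in costs)
    return successes / n_rollouts, statistics.mean(costs)

if __name__ == "__main__":
    # Hypothetical two-leg route: (mean cost, std dev) per edge.
    params = {("A", "B"): (10.0, 2.0), ("B", "C"): (15.0, 6.0)}
    route = [("A", "B"), ("B", "C")]
    confidence, expected_cost = outcome_self_confidence(
        route, params, cost_budget=30.0)
    print(f"expected cost ~ {expected_cost:.1f}, "
          f"self-confidence P(cost <= 30) = {confidence:.2f}")
```

A low value of this probability could prompt the user to reduce reliance, or prompt the planner to replan; either reaction is an instance of the trust-calibrating feedback loop described above.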

References

[1] J. Rotter, "Interpersonal trust, trustworthiness, and gullibility," 1980.

[2] L. Zucker, "Production of trust: Institutional sources of economic structure, 1840–1920," 1986.

[3] J. D. Lee et al., "Trust, self-confidence, and operators' adaptation to automation," Int. J. Hum. Comput. Stud., 1994.

[4] S. Hunt et al., "The Commitment-Trust Theory of Relationship Marketing," 1994.

[5] C. Camerer et al., "Not So Different After All: A Cross-Discipline View of Trust," 1998.

[6] C. D. Wickens et al., "A model for types and levels of human interaction with automation," IEEE Trans. Syst. Man Cybern. Part A, 2000.

[7] N. L. Chervany et al., "What Trust Means in E-Commerce Customer Relationships: An Interdisciplinary Conceptual Typology," Int. J. Electron. Commer., 2001.

[8] J. D. Lee et al., "Trust in Automation: Designing for Appropriate Reliance," 2004.

[9] H. A. Yanco et al., "Potential measures for detecting trust changes," in Proc. 7th ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI), 2012.

[10] H. A. Yanco et al., "Robot confidence and trust alignment," in Proc. 8th ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI), 2013.

[11] A. S. Clare et al., "Modeling the Impact of Operator Trust on Performance in Multiple Robot Control," AAAI Spring Symposium: Trust and Autonomous Systems, 2013.

[12] F. Zhang et al., "Human-Robot Mutual Trust in (Semi)autonomous Underwater Robots," 2014.

[13] M. L. Cummings et al., "Representing Autonomous Systems' Self-Confidence through Competency Boundaries," 2015.

[14] U. Kuter et al., "Computational Mechanisms to Support Reporting of Self Confidence of Automated/Autonomous Systems," AAAI Fall Symposia, 2015.

[15] U. Kuter et al., "Towards Self-Confidence in Autonomous Systems," 2016.