Analysis of trust in autonomy for convoy operations

With the growing use of automated systems that engage cooperatively with humans in civilian and military contexts, the operator’s level of trust in the automated system is a major factor in determining the efficacy of human-autonomy teams. Suboptimal levels of human trust in autonomy (TiA) can be detrimental to joint team performance. Miscalibrated trust can manifest in several ways, such as distrust, leading to complete disuse of the autonomy, or complacency, which leaves the autonomous system effectively unsupervised. This work investigates human behaviors that may reflect TiA in the context of an automated driving task, with the goal of improving team performance. Subjects performed a simulated leader-follower driving task with an automated driving assistant, and could choose to engage an automated lane-keeping and active cruise control system of varying performance levels. The experimental data were analyzed to identify contextual features of the simulation environment that correlated with instances of automation engagement and disengagement. Furthermore, behaviors that potentially indicate inappropriate TiA levels were identified in the subject trials using estimates of momentary risk and agent performance, expressed as functions of these contextual features. Inter-subject and intra-subject trends in automation usage and performance were also identified. This analysis indicated that for poorer-performing automation, TiA decreases over time, whereas higher-performing automation induces less drift toward diminished usage and, in some cases, increases in TiA. Subject use of automation was also found to be largely influenced by course features.
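
To make the feature-correlation step concrete, the following is a minimal sketch, not the study's actual pipeline, of relating contextual features to automation engagement via logistic regression. The file name and feature columns (curvature, obstacle density, lead-vehicle distance) are hypothetical placeholders for whatever contextual features the trial logs record.

```python
# Minimal sketch: correlate contextual features with automation engagement,
# assuming a per-timestep log of the driving trials. All column names and
# the CSV path are hypothetical, not taken from the study itself.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical trial log: one row per timestep, with a binary column
# marking whether the subject had the automation engaged.
df = pd.read_csv("trial_log.csv")
features = ["curvature", "obstacle_density", "lead_distance"]

X = StandardScaler().fit_transform(df[features].to_numpy())
y = df["automation_engaged"].to_numpy()  # 1 = engaged, 0 = manual control

model = LogisticRegression().fit(X, y)

# Standardized coefficients indicate which contextual features are most
# associated with the subject's choice to engage the automation.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```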