Trusting your assistant

The assistant interface metaphor has the potential to shield the human user from low-level, task-specific details while allowing the automation of the many idiosyncratic, mundane tasks that fall between the capabilities of commercial software packages. However, a user will not willingly put resources (money, privacy, information) at risk unless the assistant can be trusted to carry out the task in accordance with the user's goals and priorities. This risk is significant because assistant behaviors, being idiosyncratic and highly customized, will not be as well supported or documented as commercial software is. This paper describes a solution to this problem, allowing the assistant to safely execute partially trusted behaviors and to interactively increase the user's trust in a behavior so that more of its steps can be carried out autonomously. The approach is independent of how the behavior was acquired and is based on using incremental formal validation to populate a trust library for the behavior.
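As a rough illustration of this idea (a sketch, not the paper's implementation), the listing below models a trust library as a growing set of validated step identifiers: steps already in the library execute autonomously, while unvalidated steps are confirmed interactively and, once approved or formally validated, are added to the library so later runs need less user intervention. All names here (TrustLibrary, execute_behavior, confirm, run) are hypothetical.

    # Hypothetical sketch: a trust library maps each step of a behavior
    # to a validation status, so validated steps run autonomously and
    # unvalidated steps require user confirmation.
    from dataclasses import dataclass, field

    @dataclass
    class TrustLibrary:
        validated: set[str] = field(default_factory=set)

        def is_trusted(self, step_id: str) -> bool:
            return step_id in self.validated

        def record_validation(self, step_id: str) -> None:
            # Called after a step passes validation (or explicit user
            # approval); the library grows incrementally over time.
            self.validated.add(step_id)

    def execute_behavior(steps, library: TrustLibrary, confirm, run):
        # steps: iterable of (step_id, action) pairs describing the behavior.
        for step_id, action in steps:
            if library.is_trusted(step_id):
                run(action)                          # autonomous execution
            elif confirm(step_id, action):           # interactive check
                run(action)
                library.record_validation(step_id)   # trust increases

Under this reading, the same behavior definition is reused unchanged across runs; only the contents of the trust library change, which matches the abstract's claim that the approach is independent of how the behavior was acquired.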