Task models for human-computer collaboration in supervisory control of teams of autonomous systems

Supervisory control of complex teams of autonomous systems may itself require levels of autonomous decision making. A single human operator, or a small number of operators, attempting to maintain situational awareness and control over a large number of autonomous units may require automated assistance in overseeing such a team. This assistance may range from attention management services to outright automated control. We are developing such a capability to assist human controllers of automated vehicles. The core of the capability lies in a task model that includes information about the computer's ability to perform a task as well as that of the human operator. Including the risk of automation in the calculation allows us to trade off risk against capability. These values can be learned through simulation and live operations. This paper describes the task models being used, the measure of automation risk, and the composite metric used to make trade-off decisions, and briefly describes the process of learning the critical values.
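
As an illustration only, and not the paper's actual formulation, a composite metric of this kind might combine an agent's estimated task capability with a penalty for automation risk and allocate the task to whichever agent scores higher. The sketch below assumes hypothetical capability and risk values in [0, 1], a linear scoring rule, and a tunable risk_weight parameter; all of these are assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical illustration of a capability/risk trade-off for task allocation.
# The linear scoring rule, the risk_weight parameter, and the example values
# are assumptions for this sketch, not the metric described in the paper.

@dataclass
class TaskEstimate:
    capability: float  # estimated probability of completing the task (0..1)
    risk: float        # estimated risk of an adverse outcome (0..1)

def composite_score(est: TaskEstimate, risk_weight: float = 0.5) -> float:
    """Score an agent for a task: reward capability, penalize risk."""
    return est.capability - risk_weight * est.risk

def allocate(human: TaskEstimate, automation: TaskEstimate) -> str:
    """Assign the task to whichever agent has the higher composite score."""
    if composite_score(automation) > composite_score(human):
        return "automation"
    return "human"

# Example: automation is slightly more capable but carries higher risk,
# so under these assumed values the task stays with the human operator.
print(allocate(human=TaskEstimate(capability=0.80, risk=0.05),
               automation=TaskEstimate(capability=0.85, risk=0.40)))
```

In such a scheme, the capability and risk estimates would be the quantities learned through simulation and live operations, while the weighting would encode how conservatively the system defers to the human operator.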