Animating safety-critical automation logic and intent: a candidate design

Systems that require human interaction with complex automation (and vice versa) can exhibit hazardous features such as mentally intractable automation behavior and mode confusion, and can also create operator attentional and motivational problems. Moreover, efforts to minimize operator error through automation often make the system vulnerable to designer error. As the role, and hence the responsibility, of automation expands to include safety-critical decisions and tasks, ultimate accountability continues to rest squarely on the human operators and observers. No automation design is failure-proof, yet automation usually induces humans to behave as if it were, particularly as automated systems become more complex and less communicative. These are key issues confronting designers of automation and of user-machine interfaces. In this paper, a display design is proposed in which both the logic and the intent of the automation are depicted in an integrated, pattern-oriented fashion, so as to reduce cognitive demand and facilitate user detection of automation (more fundamentally, design) error. The display and its flight environment have been implemented in a PC-based simulation at MIT's Software Engineering Research Lab (SERL) and are currently being used to explore issues relating to human supervisory performance in high-threat environments.
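
To make the proposal concrete, the following is a minimal, hypothetical sketch (in Python, not drawn from the paper's actual implementation) of one way automation logic and intent might be modelled together for display: the automation's modes form a small finite-state machine, each mode carries an explicit intent annotation, and a single status line renders both so the operator can compare what the automation is doing with what it intends to do next. All names (Mode, AutopilotModel, render_status) and the altitude-hold scenario are illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch, not the paper's implementation: automation "logic"
# as a finite-state machine whose states carry explicit "intent"
# annotations, so a display can render both in one integrated view.

from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class Mode:
    name: str                    # current automation mode (the "logic" state)
    intent: str                  # what the automation will do next, and why
    transitions: Dict[str, str]  # event -> name of the next mode

# Illustrative altitude-hold scenario (assumed, not from the source).
MODES = {
    "ALT_HOLD": Mode("ALT_HOLD",
                     "hold 10,000 ft until a new target is armed",
                     {"new_target": "ALT_CAPTURE"}),
    "ALT_CAPTURE": Mode("ALT_CAPTURE",
                        "climb to the armed target altitude",
                        {"target_reached": "ALT_HOLD"}),
}

class AutopilotModel:
    def __init__(self, initial: str = "ALT_HOLD"):
        self.mode = MODES[initial]

    def handle(self, event: str) -> None:
        # Mode confusion arises when transitions like this happen silently;
        # the status line below surfaces each one alongside the new intent.
        next_name = self.mode.transitions.get(event)
        if next_name is not None:
            self.mode = MODES[next_name]

    def render_status(self) -> str:
        # One integrated line: logic state plus intent, shown together so
        # the operator can spot a mismatch between them (a possible
        # automation or design error).
        return f"MODE {self.mode.name}: {self.mode.intent}"

ap = AutopilotModel()
print(ap.render_status())   # MODE ALT_HOLD: hold 10,000 ft until ...
ap.handle("new_target")
print(ap.render_status())   # MODE ALT_CAPTURE: climb to the armed ...
```

The point of coupling each logic state to an explicit intent annotation, rather than displaying mode names alone, is that a mismatch between what the automation is doing and what it claims it intends to do becomes directly visible to the operator on the display itself.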