Design recommendations to support automated explanation and tutoring

The after-action review is an essential component of military training exercises. The use of constructive simulations for training poses a challenge for conducting such reviews, because behavior models are typically designed to simulate satisfactorily, without explicit concern for interrogating synthetic entities afterward. Ideally, users could learn not only which choices a simulator's behavior models made, but also the rationale behind those choices. This requires a rich representation of behavioral knowledge within the software system. We have integrated our explainable AI system with behavior models and log information from two simulation systems. Drawing examples from these simulators, we identify areas for improvement that would facilitate automated explanation and tutoring.