Training simulators have become increasingly popular tools for teaching humans to perform in complex environments. However, how to provide individualized, scenario-specific assessment and feedback to students remains largely an open question. To maximize training efficiency, new technologies are needed that help instructors deliver individually relevant instruction. Sandia National Laboratories has demonstrated the feasibility of automated performance assessment tools, such as the Sandia-developed Automated Expert Modeling and Student Evaluation (AEMASE) software, through proof-of-concept demonstrations, a pilot study, and an experiment. In the pilot study, the AEMASE system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain, achieved a high degree of agreement with a human grader (89%) in assessing tactical air engagement scenarios. In more recent work, AEMASE achieved similarly high agreement with human graders (83-99%) on three Navy E-2 domain-relevant performance metrics. The current study provides a rigorous empirical evaluation of the gains in training effectiveness achievable with this technology. In particular, we assessed whether giving students feedback based on automated metrics would improve training effectiveness and student performance. We trained two groups of employees, differentiated by the type of feedback they received, on a Navy E-2 simulator and assessed their performance on three domain-specific performance metrics. Students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two of the three metrics. Future work will focus on extending these developments to automated assessment of teamwork.
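The abstract does not specify AEMASE's internal algorithm, but the example-based assessment idea and the reported agreement figures can be sketched concretely. The Python snippet below is a minimal, hypothetical illustration, not AEMASE's actual implementation: it assumes a simple nearest-neighbor scheme in which a student behavior snapshot receives the grade of the closest instructor-labeled example, and it shows how percent agreement with a human grader (the quantity behind the 89% and 83-99% figures) might be computed. All function and variable names are assumptions introduced for illustration.

```python
# Hypothetical sketch of example-based performance assessment.
# Assumption: student behavior is encoded as a numeric feature vector, and a
# snapshot is graded by the label of the nearest instructor-labeled example
# (1 = good performance, 0 = bad performance).
import numpy as np

def grade_snapshot(snapshot, example_features, example_labels):
    """Return the label of the labeled example closest to the snapshot."""
    distances = np.linalg.norm(example_features - snapshot, axis=1)
    return example_labels[np.argmin(distances)]

def percent_agreement(auto_grades, human_grades):
    """Percent of cases where the automated grade matches the human grade."""
    auto = np.asarray(auto_grades)
    human = np.asarray(human_grades)
    return 100.0 * np.mean(auto == human)

# Toy example: three instructor-labeled behavior examples, two student snapshots.
examples = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
labels = np.array([1, 0, 1])
students = np.array([[0.2, 0.8], [0.7, 0.3]])

auto = [grade_snapshot(s, examples, labels) for s in students]
print(percent_agreement(auto, [1, 0]))  # -> 100.0 for this toy data
```

In practice, the agreement statistic would be computed over held-out scenarios graded independently by the automated system and a human instructor; the sketch above only fixes the ideas of example-based grading and percent agreement in runnable form.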