Automated Support for AARs: Exploiting Communication to Assess Team Performance

The After Action Review (AAR) process provides a powerful methodology that, in a training context, maximizes the benefit of exercises by enabling a unit to learn from experience through systematic reflection on its strengths and weaknesses. We have developed a tool that supports the AAR process, in effect automatically extending an Observer/Controller's (O/C) reach. The tool was developed with two training contexts in mind: live STX-lane convoy training at the National Training Center (NTC) and simulated convoy training using DARWARS Ambush! at the Mission Support Training Facility at Fort Lewis. At NTC, live radio communication is captured during training; with Ambush!, voice-over-IP (VoIP) communication is recorded. The tool automatically converts recorded speech to text and then analyzes the text, using statistical machine learning techniques, to assess a unit's performance and to identify critical incidents, leading indicators, and other training events that could be included in an AAR. We worked closely with Subject Matter Experts (SMEs) to derive the important dimensions of performance, allowing the tool to support a wide range of O/C and commander AARs. The tool rates a unit on several scales based on a mission essential task list (METL), including command and control, situation understanding, use of standard operating procedures (SOPs), and battle drills. For each rating scale, the tool selects appropriate training events that reflect the unit's range of performance, from untrained through practiced to trained. The tool's interface makes it easy to spot performance weaknesses at a glance and then to drill down into them by listening to the relevant radio communication. The tool also enables commanders to create a custom AAR by selecting events of interest, along with the associated radio communication, and adding their own comments.
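The text-analysis step described above can be illustrated with a minimal sketch of latent semantic analysis (LSA), a common statistical technique for scoring communication content against exemplar utterances. This is not the paper's actual model: the exemplar utterances, vocabulary handling, and latent dimensionality below are illustrative assumptions only.

```python
# Minimal LSA sketch: score a transcribed utterance against exemplar
# utterances by cosine similarity in a truncated-SVD latent space.
# All data and parameter choices here are illustrative assumptions.
import numpy as np

def term_doc_matrix(docs):
    """Build a raw term-by-document count matrix and a vocabulary index."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    m = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.lower().split():
            m[index[w], j] += 1.0
    return m, index

def lsa_similarity(docs, query, k=2):
    """Project documents and a query into a k-dimensional latent space
    via truncated SVD, then return the query's cosine similarity to
    each document."""
    m, index = term_doc_matrix(docs)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    uk, sk = u[:, :k], s[:k]
    doc_vecs = (np.diag(sk) @ vt[:k]).T   # each row: Sigma_k * v_j
    q = np.zeros(len(index))
    for w in query.lower().split():
        if w in index:                    # out-of-vocabulary words ignored
            q[index[w]] += 1.0
    q_vec = q @ uk                        # fold query into latent space
    def cos(a, b):
        n = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / n) if n else 0.0
    return [cos(q_vec, d) for d in doc_vecs]

# Hypothetical exemplar utterances for a rating scale:
exemplars = [
    "contact front send sitrep over",       # battle-drill language
    "all stations radio check over",        # SOP / net discipline
    "we are lost where is the checkpoint",  # weak situation understanding
]
scores = lsa_similarity(exemplars, "contact right flank sitrep to follow")
```

In a full pipeline, similarity scores like these could feed a classifier that maps transcribed utterances onto METL-based rating scales; the sketch shows only the content-similarity core.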
