Metrics and Measures for Intelligence Analysis Task Difficulty

Recent workshops and conferences supporting the intelligence community (IC) have highlighted the need to characterize the difficulty or complexity of intelligence analysis (IA) tasks in order to assess the impact and effectiveness of IA tools being considered for introduction into the IC. Three fundamental issues are: (a) how to employ rigorous methodologies in evaluating tools, given problems such as controlling for task difficulty, effects of time or learning, and small-sample limitations; (b) how to measure the difficulty or complexity of IA tasks so as to establish valid experimental or quasi-experimental designs for tool evaluation; and (c) how to develop more rigorous, summative, performance-based measures of human performance during the conduct of IA tasks, beyond the traditional reliance on formative assessments (e.g., subjective ratings). Invited discussants will be asked to comment on one or more of these issues, with the aim of bringing the most salient issues and research needs into focus.
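
Issue (a) turns in part on quantifying a tool's impact from small analyst samples. As a minimal illustrative sketch (not drawn from the panel itself), the Python snippet below computes Hedges' g, a standardized mean difference that applies a small-sample bias correction to Cohen's d; the scores, group labels, and function name are all hypothetical assumptions for illustration.

```python
import math

def hedges_g(sample_a, sample_b):
    """Small-sample-corrected standardized mean difference (Hedges' g).

    Illustrative for comparing analyst performance with vs. without a
    candidate IA tool on tasks matched for difficulty, when group sizes
    are small. Assumes each sample has at least two observations.
    """
    n1, n2 = len(sample_a), len(sample_b)
    m1 = sum(sample_a) / n1
    m2 = sum(sample_b) / n2
    # Unbiased sample variances and pooled standard deviation.
    v1 = sum((x - m1) ** 2 for x in sample_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample_b) / (n2 - 1)
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled                # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)     # small-sample correction factor
    return j * d

# Hypothetical accuracy scores (0-100) for six analysts using a new IA
# tool versus a baseline condition on difficulty-matched tasks.
with_tool = [78, 85, 80, 90, 74, 88]
baseline = [70, 75, 72, 81, 68, 79]
print(f"Hedges' g = {hedges_g(with_tool, baseline):.2f}")
```

Reporting a corrected effect size like g, rather than raw score differences alone, is one way to keep small-sample tool evaluations comparable across studies with different task pools.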
