Evaluating Expertise in a Complex Domain – Measures Based on Theory
Human factors practitioners are often concerned with defining and evaluating expertise in complex domains where there may be no agreed-upon expertise levels, no single right answers to problems, and where the observation and measurement of real-world expert performance is difficult. This paper reports the results of an experiment in which expertise was assessed in an extremely complex and demanding domain – military command decision making in tactical warfare. The hypotheses of the experiment were: 1) command decision-making expertise can be recognized in practice by domain experts; 2) differences in the command decision-making expertise of individuals can be identified even under conditions that do not fully replicate the real world; and 3) observers who are not domain experts can recognize the expert behaviors predicted by a mental-model theory about the nature of expertise. In the experiment, the expertise of military officers in developing tactical plans was assessed independently by three “super-expert” judges, and these expertise-level ratings were correlated with independent theory-based measures used by observers who were not domain experts. The results suggest that experts in a domain have a shared underlying concept of expertise in that domain even if they cannot articulate that concept, that this expertise can be elicited and measured in situations that do not completely mimic the real world, and that expertise measures based on a mental-model theory can be used effectively by observers who are not experts in the domain.
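To make the analysis concrete, the following is a minimal sketch of the kind of correlation study the abstract describes: comparing the “super-expert” judges’ expertise ratings against theory-based measures scored by non-expert observers. All of the data, the choice of Spearman rank correlation, and the averaging across judges are illustrative assumptions; the paper does not specify these details in the abstract.

```python
# Illustrative sketch only: hypothetical data and an assumed
# rank-correlation analysis, not the paper's actual method or results.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical expertise ratings (e.g., on a 1-7 scale) from three
# independent judges for eight officers' tactical plans.
judge_ratings = np.array([
    [6, 5, 2, 7, 3, 4, 6, 1],   # judge A
    [7, 5, 3, 6, 2, 4, 5, 2],   # judge B
    [6, 4, 2, 7, 3, 5, 6, 1],   # judge C
])

# Hypothetical theory-based scores (e.g., counts of predicted expert
# behaviors) recorded by non-expert observers for the same officers.
observer_scores = np.array([11, 9, 4, 13, 5, 8, 10, 2])

# Agreement among judges: pairwise rank correlations, which would bear
# on hypothesis 1 (experts share a concept of expertise).
n_judges = len(judge_ratings)
for i in range(n_judges):
    for j in range(i + 1, n_judges):
        rho, p = spearmanr(judge_ratings[i], judge_ratings[j])
        print(f"judges {i} vs {j}: rho={rho:.2f}, p={p:.3f}")

# Validity of the theory-based measure (hypothesis 3): correlate the
# judges' mean rating with the non-expert observers' scores.
mean_rating = judge_ratings.mean(axis=0)
rho, p = spearmanr(mean_rating, observer_scores)
print(f"mean judge rating vs observer measure: rho={rho:.2f}, p={p:.3f}")
```

A rank-based statistic is a natural fit here because judge ratings are ordinal and the study's claim is about agreement in ordering officers by expertise, not about the ratings' absolute values; again, this choice is an assumption for illustration.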