Knowledge and clinical problem‐solving

Summary. A consistent finding in the literature on clinical problem-solving is that scores correlate very weakly across different problems. This phenomenon is commonly labelled 'content-specificity', implying that scores differ because the content knowledge required to solve each problem differs. The present study tests this hypothesis by presenting groups of residents and clinical clerks with a series of simulated patient problems in which content was systematically varied. Each subject also completed a multiple-choice test with questions linked to each diagnosis presented in the clinical problems. Three of the four problem-solving scores showed low correlations, even between two presentations of the same problem, and no relationship to content differences. None of the scores was related to performance on the multiple-choice test. The results suggest that variability in problem-solving scores is related to factors other than content knowledge; several possibilities are discussed.
