High-fidelity patient simulation: validation of performance checklists.

BACKGROUND: Standardized scenarios can be used for performance assessments geared to the level of the learner. The purpose of this study was to validate checklists used to assess medical students' performance during high-fidelity patient simulation.

METHODS: Our undergraduate committee designed 10 scenarios based on curriculum objectives. Fifteen faculty members with undergraduate educational experience identified checklist items considered appropriate for the medical students' performance level, and identified items that, if omitted, should negatively affect grades. Items endorsed by fewer than 20% of faculty were dropped; weights for the remaining items were calculated from the faculty responses. Each student managed at least one scenario, during which their performance was videotaped. Two raters independently completed the checklists for three consecutive sessions to determine inter-rater reliability. Validity was assessed using Cronbach's alpha, with α ≥ 0.6 and α ≤ 0.9 considered acceptable internal consistency. Item analysis was performed by recalculating Cronbach's alpha with each item deleted, to determine whether that item was contributing to low internal consistency.

RESULTS: 135 students participated in the study. Inter-rater reliability between the two raters, determined at the third session, was 0.97, so a single rater completed the remaining performance assessments. Cronbach's alpha for the 10 scenarios ranged from 0.16 to 0.93; two scenarios demonstrated acceptable internal consistency with all items retained, and three more reached acceptable internal consistency with one item deleted.

CONCLUSIONS: Five of the scenarios developed for this study were shown to be valid when judged against the faculty criteria for the expected performance level.
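The psychometric steps described in METHODS are straightforward to reproduce. The sketch below is an illustration, not the authors' analysis: it assumes checklist scores are held as a students-by-items matrix, computes Cronbach's alpha, repeats the alpha-if-item-deleted item analysis, and uses a plain Pearson correlation as a stand-in for the unspecified inter-rater statistic. All data and names are hypothetical.

```python
# Illustrative sketch only -- not the study's code. Assumes checklist
# scores are a (students x items) NumPy matrix; all data are hypothetical.
import numpy as np


def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total variance)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def alpha_if_deleted(scores: np.ndarray) -> list:
    """Item analysis: recompute alpha with each item removed in turn."""
    return [cronbach_alpha(np.delete(scores, i, axis=1))
            for i in range(scores.shape[1])]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: 135 students x 12 binary checklist items, with a
    # shared "ability" component so that items correlate with each other.
    ability = rng.normal(size=(135, 1))
    scores = ((ability + rng.normal(size=(135, 12))) > 0).astype(float)

    a = cronbach_alpha(scores)
    ok = "acceptable" if 0.6 <= a <= 0.9 else "outside 0.6-0.9"
    print(f"alpha = {a:.2f} ({ok})")
    for i, a_del in enumerate(alpha_if_deleted(scores), start=1):
        note = "  <- deleting this item raises alpha" if a_del > a else ""
        print(f"item {i:2d}: alpha if deleted = {a_del:.2f}{note}")

    # The abstract does not say which inter-rater statistic was used; a
    # simple Pearson correlation of the two raters' totals is one option.
    rater1 = scores.sum(axis=1)
    rater2 = rater1 + rng.normal(scale=0.5, size=135)  # hypothetical rater 2
    print(f"inter-rater r = {np.corrcoef(rater1, rater2)[0, 1]:.2f}")
```

An item flagged by the alpha-if-deleted loop (deletion raises alpha) corresponds to the situation in RESULTS where three scenarios became acceptable once a single item was removed.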
