A Method and Resources for Assessing the Reliability of Simulation Evaluation Instruments

Aim. This article describes a successfully piloted method for facilitating rapid psychometric assessment of three simulation evaluation instruments: the Lasater Clinical Judgment Rubric, the Seattle University Evaluation Tool, and the Creighton Simulation Evaluation Instrument™.

Background. To provide valid and reliable evaluations of student performance in simulation activities, it is important to assess the psychometric properties of the evaluation instruments used.

Method. This novel method draws on a database of validated, video-archived simulations depicting nursing students performing at varying levels of proficiency. A geographically dispersed sample of 29 raters viewed and scored multiple scenarios over a six-week period. The analyses described include interrater and intrarater reliability, internal consistency, and validity assessments.

Results and Conclusion. Descriptive and inferential statistics supported the validity of the leveled scenarios. Interrater and intrarater reliability and internal-consistency estimates for data from the three tools are reported. The article provides information and resources that readers can use to assess their own simulation evaluation instruments with the described methods.
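The reliability statistics named above can be computed directly from a ratings matrix. The following is a minimal sketch, not the authors' analysis code: the function names and the toy data are invented for illustration, and the formulas are the standard ones for Cronbach's alpha and the two-way random-effects, absolute-agreement intraclass correlation, ICC(2,1), of Shrout and Fleiss.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    for an (n_targets, n_raters) ratings matrix."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    # Sums of squares from a two-way ANOVA decomposition.
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # between targets
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

For example, two raters who agree perfectly yield an ICC of 1.0, while a constant one-point offset between raters lowers ICC(2,1) because the absolute-agreement form penalizes systematic rater differences.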
