Programmatic assessment: Can we provide evidence for saturation of information?

Abstract

Purpose: According to the principles of programmatic assessment, a valid high-stakes assessment of students' performance should, amongst other things, be based on multiple data points, which is assumed to lead to saturation of information. Saturation of information is reached when an additional data point no longer adds important information for the assessor. To establish saturation of information, institutions often set minimum requirements for the number of assessment data points to be included in the portfolio.

Methods: In this study, we aimed to provide validity evidence for saturation of information by investigating the relationship between the number of data points exceeding the minimum requirements in a portfolio and the consensus between two independent assessors. Data were analyzed using a multiple logistic regression model.

Results: The results showed no relationship between the number of data points exceeding the minimum requirements and assessor consensus. This suggests either that consensus is predicted by other factors only or, more likely, that assessors had already reached saturation of information. This study took a first step in investigating saturation of information; further research is necessary to gain in-depth insight into this matter in relation to the complex process of decision-making.
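To illustrate the Methods, the logistic regression relating assessor consensus to the number of surplus data points could be set up as in the minimal R sketch below. This is only an illustration under stated assumptions, not the authors' actual analysis script: the data frame name (portfolios) and column names (consensus, n_extra) are hypothetical.

# Minimal sketch, assuming a hypothetical data frame 'portfolios' with:
#   consensus - 1 if the two independent assessors agreed on the portfolio decision, 0 otherwise
#   n_extra   - number of assessment data points exceeding the minimum requirement
fit <- glm(consensus ~ n_extra, data = portfolios, family = binomial)
summary(fit)      # coefficient and p-value for n_extra
exp(coef(fit))    # odds ratio per additional data point

In this setup, a coefficient for n_extra that is indistinguishable from zero is consistent with the reported finding of no relationship between the number of extra data points and consensus.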
