Twice a year, training programs must report milestones for every resident to the Accreditation Council for Graduate Medical Education (ACGME). The ACGME lists possible methods for assessing resident progress to inform the milestones, but many are subjective. In addition, the ACGME surveys residents to verify that programs give trainees feedback on their performance, as well as on their personal clinical effectiveness. To make feedback in the latter dimension reliable and meaningful, program directors are searching for, and devising, systems that provide objective, unbiased clinical performance data.

The ability to gather and report process and outcome data via automated systems (eg, electronic health records, registries, and billing data) is relatively new in medical practice, and educators should be aware of its complexities. Obtaining structured, objective clinical performance feedback data can be a challenge. Some groups provide automated feedback of clinical performance data on measures such as proper antibiotic administration and the incidence of complications. Unfortunately, the authors of 1 study found no correlation between level of training and performance on these metrics, nor any longitudinal improvement in the metrics for a given resident over time.

As departments collect data for quality and milestone reporting, they should be able to parse those data down to the level of the individual resident. The temptation to use these data to "get some numbers" and thereby meaningfully fulfill the feedback requirement may become significant. This secondary use of patient data from electronic health records, billing, and other sources to understand individual provider performance is still in its infancy, and the data can easily be misinterpreted and misused. Accuracy and transparency must be considered before providing residents with data gathered for other purposes, and particularly before using those data for competency determinations.

When devising policies that use data gathered for other purposes to evaluate resident clinical performance, program directors should be prepared to answer the following questions.

1. How can we be sure that these data reflect a specific resident's patients?
[1] Jesse M. Ehrenfeld, et al. Automated Near–Real-time Clinical Performance Feedback for Anesthesiology Residents: One Piece of the Milestones Puzzle. Anesthesiology, 2014.
[2] Frank Eijkenaar, et al. Key issues in the design of pay for performance programs. The European Journal of Health Economics, 2011.
[3] S. Pearson, et al. Measuring the performance of individual physicians by collecting data from multiple health plans: the results of a two-state test. Health Affairs, 2011.
[4] J. Richman, et al. The Importance of Measuring Competency-Based Outcomes: Standard Evaluation Measures Are Not Surrogates for Clinical Performance of Internal Medicine Residents. Teaching and Learning in Medicine, 2009.
[5] S. Rosenbaum, et al. Fair process in physician performance rating systems: overview and analysis of Colorado's Physician Designation Disclosure Act. 2009.
[6] Constance K. Haan, et al. A model to begin to use clinical outcomes in medical education. Academic Medicine, 2008.
[7] Jeroan J. Allison, et al. Implementing Achievable Benchmarks in Preventive Health: A Controlled Trial in Residency Education. Academic Medicine, 2006.
[8] Thomas R. Belin, et al. Physician Performance Assessment: Nonequivalence of Primary Care Measures. Medical Care, 2003.
[9] S. Nashef, et al. The logistic EuroSCORE. 2003.