Identifying minimally acceptable interpretive performance criteria for screening mammography.

PURPOSE
To develop criteria that identify thresholds for minimally acceptable physician performance in interpreting screening mammography studies, and to profile the impact that implementing these criteria may have on the practice of radiology in the United States.

MATERIALS AND METHODS
In an institutional review board-approved, HIPAA-compliant study, an Angoff approach was used in two phases to set criteria identifying minimally acceptable interpretive performance at screening mammography, as measured by sensitivity, specificity, recall rate, positive predictive value (PPV) of recall (PPV(1)) and of biopsy recommendation (PPV(2)), and cancer detection rate. Each performance measure was considered separately. In phase I, a group of 10 expert radiologists considered a hypothetical pool of 100 interpreting physicians and conveyed their cut points for minimally acceptable performance. The experts were informed that a physician whose performance fell outside the cut points would receive a recommendation to consider additional training. During each round of scoring, all expert radiologists' cut points were summarized into a mean, median, mode, and range, which were presented back to the group. In phase II, normative performance data were shown to illustrate the potential impact the cut points would have on radiology practice. Rescoring continued until the experts reached consensus. Simulation methods were then used to estimate the potential impact if underperforming physicians improved to acceptable levels through effective additional training.

RESULTS
Final cut points to identify low performance were as follows: sensitivity less than 75%, specificity less than 88% or greater than 95%, recall rate less than 5% or greater than 12%, PPV(1) less than 3% or greater than 8%, PPV(2) less than 20% or greater than 40%, and cancer detection rate less than 2.5 per 1000 interpretations. These cut points would likely result in 18%-28% of interpreting physicians being considered for additional training on the basis of sensitivity and cancer detection rate, while the cut points for specificity, recall rate, PPV(1), and PPV(2) would likely affect 34%-49% of practicing interpreters. If underperforming physicians moved into the acceptable range, detection of an additional 14 cancers per 100,000 women screened and a reduction of 880 false-positive examinations per 100,000 women screened would be expected.

CONCLUSION
This study identified minimally acceptable performance levels for interpreters of screening mammography studies. Interpreting physicians whose performance falls outside the identified cut points should be reviewed in the context of their specific practice settings and considered for additional training.
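The final cut points reported above amount to a simple range check per measure. The sketch below expresses that check in Python; the function and field names are illustrative (they do not appear in the study), and only the threshold values are taken from the Results.

```python
# Acceptable ranges for each performance measure, per the study's final cut
# points. Each entry is (low, high); None means that side is unbounded.
CUT_POINTS = {
    "sensitivity":           (0.75, None),  # flag if < 75%
    "specificity":           (0.88, 0.95),  # flag if < 88% or > 95%
    "recall_rate":           (0.05, 0.12),  # flag if < 5% or > 12%
    "ppv1":                  (0.03, 0.08),  # PPV of recall
    "ppv2":                  (0.20, 0.40),  # PPV of biopsy recommendation
    "cancer_detection_rate": (2.5, None),   # per 1000 interpretations
}

def flag_low_performance(metrics: dict) -> list:
    """Return the measures on which a physician falls outside the
    acceptable range and would be considered for additional training."""
    flagged = []
    for measure, (low, high) in CUT_POINTS.items():
        value = metrics[measure]
        if (low is not None and value < low) or \
           (high is not None and value > high):
            flagged.append(measure)
    return flagged
```

For example, a physician with sensitivity 0.70 but all other measures in range would be flagged only on sensitivity; as the study notes, such a result is a prompt for review in context, not an automatic judgment.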
