Binary and multi-category ratings in a laboratory observer performance study: a comparison.

The authors investigated radiologists' performance during retrospective interpretation of screening mammograms when using a binary decision of whether or not to recall a woman for additional procedures, and compared it with their receiver operating characteristic (ROC)-type performance curves obtained with a semi-continuous rating scale. Under an Institutional Review Board-approved protocol, nine experienced radiologists independently rated an enriched set of 155 examinations that they had not personally read in the clinic, mixed with other enriched sets of examinations that they had individually read in the clinic, using both a screening BI-RADS rating scale (recall/not recall) and a semi-continuous ROC-type rating scale (0 to 100). The vertical distance, namely the difference in sensitivity at the same specificity, between the empirical ROC curve and the binary operating point was computed for each reader. The vertical distance averaged over all readers was used to assess the proximity of the performance levels under the binary and ROC-type rating scales. There was no apparent systematic tendency toward better performance with either of the two rating approaches: four readers performed better using the semi-continuous rating scale, four readers performed better with the binary scale, and one reader's binary point fell exactly on the empirical ROC curve. Only one of the nine readers had a binary "operating point" that was statistically distant from the same reader's empirical ROC curve. Reader-specific differences ranged from -0.046 to 0.128, with an average 95% confidence interval width of 0.2 and individual-reader p-values ranging from 0.050 to 0.966. On average, radiologists performed similarly under the two rating scales, in that the average distance between individual readers' binary operating points and their ROC curves was close to zero; the 95% confidence interval for the fixed-reader average (0.016) was (-0.0206, 0.0631) (two-sided p-value 0.35). In conclusion, the authors found that in retrospective observer performance studies the use of a binary response or a semi-continuous rating scale led to consistent results in terms of performance as measured by sensitivity-specificity operating points.
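The vertical-distance measure described above (the sensitivity gap, at matched specificity, between a reader's binary operating point and the same reader's empirical ROC curve) is straightforward to reproduce. The following Python sketch is illustrative only, not the authors' code; the toy data, function names, and interpolation choice are assumptions made for the example.

```python
# Minimal sketch (assumed, not from the paper): vertical distance between a
# reader's binary operating point and the same reader's empirical ROC curve.
import numpy as np


def empirical_roc(scores, labels):
    """Empirical ROC operating points from semi-continuous ratings.

    scores: reader ratings (higher = more suspicious); labels: 0/1 truth.
    Returns false-positive and true-positive rates, one point per unique
    rating threshold, plus the (0, 0) origin.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    n_pos = (labels == 1).sum()
    n_neg = (labels == 0).sum()
    fpr, tpr = [0.0], [0.0]
    for t in np.unique(scores)[::-1]:          # scan thresholds high to low
        called = scores >= t
        tpr.append((called & (labels == 1)).sum() / n_pos)
        fpr.append((called & (labels == 0)).sum() / n_neg)
    return np.array(fpr), np.array(tpr)


def tpr_at_fpr(fpr, tpr, target_fpr):
    """Sensitivity of the empirical curve at a given false-positive rate,
    linearly interpolating between operating points (and taking the top of a
    vertical segment when the target FPR coincides with one)."""
    exact = np.isclose(fpr, target_fpr)
    if exact.any():
        return float(tpr[exact].max())
    return float(np.interp(target_fpr, fpr, tpr))


# Hypothetical single-reader toy data: 0-100 ratings, recall decisions, truth.
ratings = np.array([5, 80, 30, 95, 10, 60, 45, 70, 20, 90])
truth   = np.array([0,  1,  0,  1,  0,  1,  0,  0,  0,  1])
recall  = np.array([0,  1,  0,  1,  0,  0,  1,  0,  0,  1])

fpr, tpr = empirical_roc(ratings, truth)
binary_tpr = ((recall == 1) & (truth == 1)).sum() / (truth == 1).sum()
binary_fpr = ((recall == 1) & (truth == 0)).sum() / (truth == 0).sum()

# Positive values mean the empirical ROC curve lies above the binary point.
distance = tpr_at_fpr(fpr, tpr, binary_fpr) - binary_tpr
print(f"vertical distance for this reader: {distance:+.3f}")
```

Averaging such reader-specific distances over readers yields the fixed-reader summary statistic reported above.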