Consensus interpretation in imaging research: is there a better way?

We believe that our readers are interested in investigations describing physicians’ performance of specific techniques under reasonably realistic conditions. Simulating such realistic conditions, however, requires not only thorough reporting of variability between observers and techniques but also a sufficiently large number of observers.
