Reader agreement studies.

This is the 17th article in the series designed by the American College of Radiology (ACR), the Canadian Association of Radiologists, and the American Journal of Roentgenology. The series, which will ultimately comprise 22 articles, is designed to progressively educate radiologists in the methodologies of rigorous clinical research, from the most basic principles to a level of considerable sophistication. The articles are intended to complement interactive software, available on the ACR Web site (www.acr.org), that permits users to work with what they have learned.
