High agreement but low kappa: II. Resolving the paradoxes.
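The paradox named in the title can be reproduced in a few lines. The sketch below (my own illustration, not code from the paper; the function name is mine) computes Cohen's kappa for a 2x2 agreement table whose skewed marginals drive chance-expected agreement so high that even 91% raw agreement yields a kappa near 0.13.

```python
# Illustrative sketch of the "high agreement but low kappa" paradox:
# with badly skewed marginals, chance agreement p_e is already large,
# so kappa = (p_o - p_e) / (1 - p_e) stays small despite high p_o.

def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
       a = both raters say yes, d = both say no,
       b, c = the two kinds of disagreement."""
    n = a + b + c + d
    p_o = (a + d) / n                                # observed agreement
    p_yes1, p_yes2 = (a + b) / n, (a + c) / n        # raters' "yes" marginals
    p_e = p_yes1 * p_yes2 + (1 - p_yes1) * (1 - p_yes2)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# 91 of 100 paired ratings agree, yet kappa is only about 0.13,
# because both raters say "yes" almost all the time.
kappa = cohens_kappa(90, 4, 5, 1)
```

With balanced marginals the same observed agreement gives a much higher kappa, which is exactly the base-rate sensitivity the cited papers debate.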

[1] A. Feinstein, et al. High agreement but low kappa: I. The problems of two paradoxes. Journal of Clinical Epidemiology, 1990.

[2] C. Wells, et al. The value and hazards of standardization in clinical epidemiologic research. Journal of Clinical Epidemiology, 1988.

[3] J. Fleiss, et al. Quantification of agreement in psychiatric diagnosis revisited. Archives of General Psychiatry, 1987.

[4] E. Spitznagel, et al. A proposed solution to the base rate problem in the kappa statistic. Archives of General Psychiatry, 1985.

[5] H. C. Kraemer, et al. Estimating false alarms and missed events from interobserver agreement: Comment on Kaye, 1982.

[6] L. A. Goodman, et al. Measures of association for cross classifications, 1979.

[7] A. E. Maxwell, et al. Coefficients of agreement between observers and their interpretation. British Journal of Psychiatry, 1977.

[8] I. Burn, et al. Validity of clinical examination and mammography as screening tests for breast cancer. The Lancet, 1975.

[9] J. Fleiss. Measuring agreement between two judges on the presence or absence of a trait. Biometrics, 1975.

[10] Letter: Protein requirement. The Lancet, 1974.

[11] E. Rogot, et al. A proposed index for measuring agreement in test-retest studies. Journal of Chronic Diseases, 1966.

[12] P. Armitage, et al. The measurement of observer disagreement in the recording of signs, 1966.

[13] W. Youden, et al. Index for rating diagnostic tests. Cancer, 1950.

[14] G. Yule. On the methods of measuring association between two attributes, 1912.