Cohen’s weighted kappa with additive weights
[1] L. Kurland, et al. Studies on multiple sclerosis in Winnipeg, Manitoba, and New Orleans, Louisiana. I. Prevalence; comparison between the patient groups in Winnipeg and New Orleans. American Journal of Hygiene, 1953.
[2] Janis E. Johnston, et al. Weighted Kappa for Multiple Raters. Perceptual and Motor Skills, 2008.
[3] Adelin Albert, et al. A note on the linearly weighted kappa coefficient for ordinal scales. 2009.
[4] H. Brenner, et al. Dependence of Weighted Kappa Coefficients on the Number of Categories. Epidemiology, 1996.
[5] Kenneth J. Berry, et al. A note on Cohen’s weighted kappa coefficient of agreement with linear weights. 2009.
[6] Matthijs J. Warrens, et al. Some Paradoxical Results for the Quadratically Weighted Kappa. 2012.
[7] Matthijs J. Warrens, et al. Equivalences of weighted kappas for multiple raters. 2012.
[8] T. Allison, et al. A New Procedure for Assessing Reliability of Scoring EEG Sleep Recordings. 1971.
[9] Matthijs J. Warrens, et al. Cohen's kappa can always be increased and decreased by combining categories. 2010.
[10] R. D. Sperduto, et al. Evaluation of an iris color classification system. The Eye Disorders Case-Control Study Group. Investigative Ophthalmology & Visual Science, 1990.
[11] Matthijs J. Warrens. Cohen’s linearly weighted kappa is a weighted average. Advances in Data Analysis and Classification, 2012.
[12] G. Teasdale, et al. Structured interviews for the Glasgow Outcome Scale and the extended Glasgow Outcome Scale: guidelines for their use. Journal of Neurotrauma, 1998.
[13] Jacob Cohen, et al. The Equivalence of Weighted Kappa and the Intraclass Correlation Coefficient as Measures of Reliability. 1973.
[14] Albert Westergren, et al. Statistical methods for assessing agreement for ordinal data. Scandinavian Journal of Caring Sciences, 2005.
[15] J. Miller, et al. Glasgow Outcome Scale: an inter-rater reliability study. Brain Injury, 1993.
[16] Hubert J. A. Schouten. Nominal scale agreement among observers. 1986.
[17] Jacob Cohen. A Coefficient of Agreement for Nominal Scales. 1960.
[18] L. Hsu, et al. Interrater Agreement Measures: Comments on Kappan, Cohen's Kappa, Scott's π, and Aickin's α. 2003.
[19] W. Willett, et al. Misinterpretation and misuse of the kappa statistic. American Journal of Epidemiology, 1987.
[20] Harold L. Kundel, et al. Measurement of Observer Agreement. 2003.
[21] Matthijs J. Warrens, et al. Cohen’s Linearly Weighted Kappa is a Weighted Average of 2×2 Kappas. 2011.
[22] D. V. Cicchetti, et al. Assessing Inter-Rater Reliability for Rating Scales: Resolving some Basic Issues. British Journal of Psychiatry, 1976.
[23] Christof Schuster. A Note on the Interpretation of Weighted Kappa and its Relations to Other Rater Agreement Statistics for Metric Scales. 2004.
[24] P. Graham, et al. The analysis of ordinal agreement data: beyond weighted kappa. Journal of Clinical Epidemiology, 1993.
[25] J. R. Landis, et al. The measurement of observer agreement for categorical data. Biometrics, 1977.
[26] Art Noda, et al. Kappa coefficients in medical research. Statistics in Medicine, 2002.
[27] Jacob Cohen, et al. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. 1968.
[28] Robert J. Glynn, et al. Evaluation of an Iris Color Classification System. 2005.
[29] B. Everitt, et al. Large sample standard errors of kappa and weighted kappa. 1969.