The Implementation and Evolution of STAR/CIF Ontologies: Interoperability and Preservation of Structured Data