Properties of Bangdiwala’s B

Cohen’s kappa is the most widely used coefficient for assessing interobserver agreement on a nominal scale. An alternative coefficient for quantifying agreement between two observers is Bangdiwala’s B. A proper interpretation of an agreement coefficient requires a clear understanding of its meaning. The properties of the kappa coefficient have been studied extensively and are well documented; the properties of coefficient B have received far less attention. In this paper, various new properties of B are presented. Category B-coefficients are defined, which are the basic building blocks of B. It is studied how coefficient B, Cohen’s kappa, the observed agreement and the associated category coefficients are related. It turns out that the relationships between the coefficients are quite different for $$2\times 2$$ tables than for agreement tables with three or more categories.
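To make the coefficients discussed above concrete, the following is a minimal sketch of how the observed agreement, Cohen’s kappa, and Bangdiwala’s B are computed from a square contingency table, using their standard definitions (kappa as chance-corrected agreement; B as the sum of squared diagonal cells over the sum of products of matching row and column marginals). The example table is invented for illustration only.

```python
def agreement_coefficients(table):
    """Return (observed agreement, Cohen's kappa, Bangdiwala's B)
    for a k x k contingency table given as a list of lists."""
    k = len(table)
    n = sum(sum(row) for row in table)
    row = [sum(table[i]) for i in range(k)]                        # row marginals
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]   # column marginals

    # Observed agreement: proportion of cases on the main diagonal.
    p_o = sum(table[i][i] for i in range(k)) / n
    # Chance-expected agreement from the marginal distributions.
    p_e = sum(row[i] * col[i] for i in range(k)) / n ** 2
    kappa = (p_o - p_e) / (1 - p_e)

    # Bangdiwala's B: squared diagonal counts relative to the
    # products of the corresponding row and column marginals.
    B = (sum(table[i][i] ** 2 for i in range(k))
         / sum(row[i] * col[i] for i in range(k)))
    return p_o, kappa, B

# Hypothetical 2x2 agreement table for two observers.
p_o, kappa, B = agreement_coefficients([[20, 5], [10, 15]])
# p_o = 0.7, kappa = 0.4, B = 0.5
```

Note that kappa and B need not coincide: in this sketch the same table yields kappa = 0.4 but B = 0.5, illustrating why studying the relationships between the coefficients is worthwhile.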
