Estimating kappa from binocular data.

A common error in the statistical analysis of ophthalmic data is failure to account for the positive correlation generally present between observations made on fellow eyes. The usual alternative, analysing data from only one eye of each patient, avoids the correlation but sacrifices power and produces unnecessarily wide confidence intervals. This paper discusses a method for estimating kappa, a measure of agreement between two graders, when both graders rate the same set of pairs of eyes. The method assumes that the true left-eye and right-eye kappa values are equal and uses the correlated binocular data to estimate confidence intervals for the common kappa. In simulations, the new estimators outperformed the estimator based on only one eye: the new confidence intervals had the correct coverage probability, yet were usually only about 70 per cent as wide as the single-eye intervals. The general methodology described here also applies to analysing grader agreement in ratings of other paired body structures.
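For reference, Cohen's kappa contrasts the observed agreement $p_o$ between two graders with the agreement $p_e$ expected by chance:

$$\kappa = \frac{p_o - p_e}{1 - p_e},$$

where $p_o$ is the proportion of cases on which the graders agree and $p_e = \sum_c p_{1c}\, p_{2c}$, with $p_{ic}$ the proportion of cases that grader $i$ assigns to category $c$.

The abstract does not give the paper's exact estimator or interval formula, so the following is only a minimal sketch of the general idea: compute a per-eye kappa, pool across fellow eyes under the assumption of a common true kappa, and resample whole patients (rather than individual eyes) so that the confidence interval respects the intereye correlation. All function names, the simple averaging rule, and the cluster bootstrap are illustrative assumptions, not the method of the paper.

```python
import numpy as np

def cohen_kappa(a, b, categories):
    """Cohen's kappa between two graders' ratings a and b."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)  # observed agreement
    # chance agreement from each grader's marginal category frequencies
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_o - p_e) / (1.0 - p_e)

def pooled_kappa(g1, g2, categories):
    """Average of per-eye kappas; assumes true left/right kappas are equal.

    g1, g2: arrays of shape (n_patients, 2) holding grader 1's and
    grader 2's ratings for the (right, left) eyes of each patient.
    """
    k_right = cohen_kappa(g1[:, 0], g2[:, 0], categories)
    k_left = cohen_kappa(g1[:, 1], g2[:, 1], categories)
    return 0.5 * (k_right + k_left)

def cluster_bootstrap_ci(g1, g2, categories, n_boot=2000, alpha=0.05, seed=0):
    """Percentile CI for the pooled kappa, resampling whole patients so
    that the two eyes of a patient stay together in every replicate."""
    rng = np.random.default_rng(seed)
    n = g1.shape[0]
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # sample patients with replacement
        stats.append(pooled_kappa(g1[idx], g2[idx], categories))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical example: five patients, binary gradings for (right, left) eyes.
g1 = np.array([[1, 0], [0, 0], [1, 1], [0, 1], [1, 1]])
g2 = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [1, 0]])
print(pooled_kappa(g1, g2, categories=[0, 1]))
print(cluster_bootstrap_ci(g1, g2, categories=[0, 1], n_boot=500))
```

Resampling at the patient level is the step that distinguishes this sketch from a naive analysis: treating the two eyes as independent observations would understate the variance of the pooled estimate, whereas discarding one eye per patient would widen the interval unnecessarily, which is the trade-off the paper's method is designed to avoid.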