Inter-observer and intra-observer variability of the Oxford clinical cataract classification and grading system

Intra-observer (within-observer) and inter-observer (between-observer) variability of the Oxford Clinical Cataract Classification and Grading System were studied. Twenty cataracts were examined and scored independently by four observers. On a separate occasion, two of the observers repeated the assessments of the same cataracts without access to their initial observations. The chance-corrected, weighted kappa statistics for observer agreement, for both inter-observer and intra-observer variability, demonstrated satisfactory repeatability of the cataract grading system. The overall intra-observer mean weighted kappa was κw = +0.68 (range of SE κ = 0.012–0.052) and the overall inter-observer mean weighted kappa was κw = +0.55 (range of SE κ = 0.011–0.043).
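
For readers wishing to reproduce this kind of analysis, the sketch below shows how a weighted Cohen's kappa could be computed in Python for two observers grading the same set of cataracts on an ordinal scale. The grade lists and the choice of linear weights are illustrative assumptions, not the study's data or exact method.

# Minimal sketch: weighted Cohen's kappa for two observers' ordinal grades.
# The grades below and the linear weighting scheme are illustrative
# assumptions, not the data reported in the abstract.
from sklearn.metrics import cohen_kappa_score

# Hypothetical grades (0-4) assigned to 20 cataracts by two observers.
observer_a = [0, 1, 1, 2, 2, 3, 3, 4, 4, 2, 1, 0, 3, 2, 4, 1, 2, 3, 0, 4]
observer_b = [0, 1, 2, 2, 2, 3, 4, 4, 3, 2, 1, 1, 3, 2, 4, 1, 2, 2, 0, 4]

# Weighted kappa is chance-corrected and penalises disagreements by their
# distance on the ordinal scale, so adjacent-grade disagreements count less
# than large ones.
kappa_w = cohen_kappa_score(observer_a, observer_b, weights="linear")
print(f"weighted kappa = {kappa_w:+.2f}")

Intra-observer agreement would be computed in the same way, comparing each observer's first and repeat gradings of the same cataracts.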
