Statistical tools to improve the assessment of agreement between several observers.

In the context of assessing the impact of management and environmental factors on animal health, behaviour or performance, it has become increasingly important to conduct (epidemiological) studies in the field. The number of farms investigated per study is therefore considerably high, so that numerous observers are needed. To maintain the quality and validity of study results, calibration meetings, in which observers are trained and the current level of agreement is assessed, have to be conducted to minimise the observer effect. When study animals are rated independently by the same observers on a categorical variable, the exclusion test can be performed to identify disagreeing observers. This statistical test compares, for each variable and each observer, the observer-specific agreement with the overall agreement among all observers, based on kappa coefficients. It accounts for two major challenges, namely the absence of a gold-standard observer and different data types comprising ordinal, nominal and binary data. The presented methods are applied to a reliability study assessing the agreement among eight observers rating welfare parameters of laying hens. The degree to which the observers agreed depended on the investigated item (global weighted kappa coefficients: 0.37 to 0.94). The proposed method and graphical description served to assess the direction and degree to which an observer deviates from the others. It is suggested that studies with numerous observers be further improved by conducting calibration meetings and accounting for observer bias.
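The core idea of comparing an observer-specific agreement with the overall agreement can be illustrated with a small sketch. This is not the authors' exclusion test itself (which is a formal significance test); it is a simplified illustration, assuming unweighted Cohen's kappa, where each observer's mean pairwise kappa with the other observers is set against the mean pairwise kappa across all observer pairs:

```python
from itertools import combinations

def cohen_kappa(r1, r2):
    """Unweighted Cohen's kappa between two raters' categorical ratings."""
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    # Observed agreement: proportion of animals rated identically.
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of the raters' marginal category proportions.
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

def observer_vs_overall(ratings):
    """ratings: dict observer -> list of category labels for the same animals.

    Returns the overall mean pairwise kappa and, per observer, the mean
    kappa of that observer with all others. A markedly lower per-observer
    value points to a disagreeing observer.
    """
    pair_k = {(a, b): cohen_kappa(ratings[a], ratings[b])
              for a, b in combinations(ratings, 2)}
    overall = sum(pair_k.values()) / len(pair_k)
    per_obs = {o: sum(k for pair, k in pair_k.items() if o in pair)
                  / (len(ratings) - 1)
               for o in ratings}
    return overall, per_obs

# Hypothetical example: observers A-C agree, observer D deviates.
ratings = {
    "A": [0, 0, 1, 1, 2, 2],
    "B": [0, 0, 1, 1, 2, 2],
    "C": [0, 0, 1, 1, 2, 2],
    "D": [2, 1, 0, 2, 0, 1],
}
overall, per_obs = observer_vs_overall(ratings)
```

In this constructed example, observer D's mean kappa falls well below the overall value, which is the kind of deviation the exclusion test is designed to detect formally; for ordinal scores, a weighted kappa would be used instead.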
