INTRARATER RELIABILITY
The notion of intrarater reliability will interest researchers concerned with the reproducibility of clinical measurements. A rater in this context refers to any data-generating system, including individuals and laboratories; intrarater reliability is a measure of a rater's self-consistency in the scoring of subjects. The importance of data reproducibility stems from the need for scientific inquiry to rest on solid evidence: reproducible clinical measurements are recognized as representing a well-defined characteristic of interest. Reproducibility is a source of concern because of the extensive manipulation of medical equipment in test laboratories and the complexity of the judgmental processes involved in clinical data gathering. Grundy (1) stresses the importance of choosing a good laboratory when measuring cholesterol levels in order to ensure their validity and reliability. This article discusses some basic methodological aspects of intrarater reliability estimation. For continuous data, the intraclass correlation coefficient (ICC) is the measure of choice; it is discussed in the section entitled "Intrarater reliability for continuous scores." For nominal data, the kappa coefficient of Cohen (2) and its many variants are the preferred statistics; they are discussed in the section entitled "Nominal scale score data." The last section is devoted to extensions of kappa-like statistics that yield intrarater reliability coefficients for ordinal and interval data.
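To make the two measures named above concrete, the following sketch computes a one-way random-effects ICC for repeated continuous scores and Cohen's kappa for repeated nominal ratings. The data, function names, and the choice of the one-way ICC(1,1) form are illustrative assumptions, not taken from this article:

```python
# Illustrative sketch: intrarater reliability from two scoring occasions
# by the same rater. All data below are hypothetical.

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for repeated continuous scores.

    `scores` is a list of per-subject lists, each holding k repeated
    measurements of the same subject by the same rater.
    """
    n = len(scores)        # number of subjects
    k = len(scores[0])     # number of repeated measurements per subject
    grand = sum(sum(row) for row in scores) / (n * k)
    subject_means = [sum(row) / k for row in scores]
    # Between-subjects and within-subject mean squares (one-way ANOVA).
    msb = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(scores, subject_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def cohen_kappa(r1, r2):
    """Cohen's kappa for two nominal ratings of the same subjects."""
    n = len(r1)
    categories = set(r1) | set(r2)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    p_e = sum((r1.count(c) / n) * (r2.count(c) / n)        # chance agreement
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Continuous scores: 5 subjects, each scored twice by the same rater.
ratings = [[9.1, 9.3], [7.8, 8.0], [6.5, 6.4], [8.2, 8.5], [5.9, 6.1]]
icc = icc_oneway(ratings)

# Nominal classifications of 6 subjects on two occasions.
occasion1 = ["A", "B", "A", "C", "B", "A"]
occasion2 = ["A", "B", "A", "C", "A", "A"]
kappa = cohen_kappa(occasion1, occasion2)
```

With these hypothetical data the rater is highly self-consistent on the continuous scale (ICC close to 1), while the single disagreement among the six nominal classifications yields a kappa of about 0.71.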