ABSTRACT

Evaluation of interobserver concordance in pediatric research: the Kappa coefficient

Numerous pediatric research protocols are designed to assess the degree of concordance between two observers; in other words, to determine the extent of their agreement. A frequently used statistical tool is available for determining interobserver concordance: the Kappa coefficient (κ). The present article explains the theoretical background of this coefficient, the methodology employed for its calculation, and the way in which its value is correctly interpreted. In simple terms, the Kappa coefficient (κ) corresponds to the proportion of observed concordances among the total number of observations, once all random concordances have been excluded. The Kappa coefficient (κ) takes a value between -1 and +1, with +1 representing the strongest degree of interobserver concordance. Conversely, a value of κ = 0 reflects that the observed concordance is precisely that expected by chance. The interpretation of the Kappa coefficient (κ) is performed by correlating its value with a qualitative scale comprising six levels of strength of agreement ("poor", "slight", "fair", "moderate", "substantial" and "almost perfect"), which simplifies its comprehension.
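As a minimal illustrative sketch (not taken from the article), the definition above — observed agreement minus chance agreement, divided by the maximum possible agreement beyond chance — can be computed from a two-rater agreement table. The function name `cohen_kappa` and the 2×2 example data are hypothetical and chosen only for illustration.

```python
def cohen_kappa(table):
    """Compute the Kappa coefficient from a square agreement table.

    table[i][j] = number of cases rated category i by observer A
    and category j by observer B.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed agreement: proportion of cases on the diagonal.
    p_o = sum(table[i][i] for i in range(k)) / n
    # Chance-expected agreement, from the two observers' marginal totals.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row_totals[i] * col_totals[i] for i in range(k)) / (n * n)
    # Kappa: agreement beyond chance, scaled to its maximum possible value.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two observers classify 100 radiographs
# as "abnormal" or "normal".
table = [[40, 10],   # observer A: abnormal; B: abnormal / normal
         [5, 45]]    # observer A: normal;   B: abnormal / normal
print(round(cohen_kappa(table), 2))  # → 0.7
```

Here the observers agree on 85 of 100 cases (p_o = 0.85), but 50 agreements would be expected by chance alone (p_e = 0.50), so κ = (0.85 − 0.50) / (1 − 0.50) = 0.70 — "substantial" agreement on the qualitative scale described above.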