When ratings from one source have been averaged, but ratings from another source have not: Problems and solutions

Studies using the single-aggregate approach (L. R. James, 1982), in which assessments made by individual respondents are correlated with other assessments that have been averaged across multiple respondents, can exhibit a systematic bias of 20% to 70% or more when used to estimate individual-level relationships. Not only may the results of such studies be erroneous, but theory development based on them may be misguided. A comprehensive solution (nested and crossed designs) to the single-aggregation problem is provided through generalizability theory. Results show that the aggregation bias is a function of both the generalizability (reliability) of individual responses and the number of individuals per group. Conceptual parallels to classical measurement theory are discussed. Factors are presented for converting single-aggregated correlations and standard deviations to estimates of the corresponding values with the individual as the level of analysis.
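As a rough illustration of why the bias grows with group size and shrinks with rater reliability (a minimal sketch using the classical variance-of-the-mean argument, not the paper's generalizability-theory derivation), suppose an individual-level score x is correlated with a y score averaged over k raters whose individual ratings have interrater reliability r_yy. The variance of the averaged variable drops to Var(y)·[1 + (k − 1)·r_yy]/k, so the observed correlation is inflated by roughly √(k / [1 + (k − 1)·r_yy]). The function name and example values below are illustrative only.

```python
import math

def aggregation_inflation(r_yy: float, k: int) -> float:
    """Approximate factor by which corr(x, y-bar) exceeds corr(x, y) when
    y-bar averages k ratings with interrater reliability r_yy.
    Illustrative sketch only; the paper derives its conversion factors
    through generalizability theory."""
    return math.sqrt(k / (1 + (k - 1) * r_yy))

# Modest reliabilities and small groups reproduce the 20-70%-or-more range
# cited in the abstract.
for r_yy, k in [(0.50, 3), (0.30, 5), (0.20, 10)]:
    pct = 100 * (aggregation_inflation(r_yy, k) - 1)
    print(f"r_yy = {r_yy:.2f}, k = {k:2d}: inflation ≈ {pct:.0f}%")
```

Under these simplifying assumptions, dividing a single-aggregated correlation by this factor would recover an individual-level estimate, which is the kind of conversion factor the abstract describes.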

[1] James W. Smither, et al. Can Multi-Source Feedback Change Perceptions of Goal Accomplishment, Self-Evaluations, and Performance-Related Outcomes? Theory-Based Applications and Directions for Research, 1995.

[2] K. Klein, et al. Levels Issues in Theory Development, Data Collection, and Analysis, 1994.

[3] Cheri Ostroff. The Effects of Climate and Personal Influences on Individual Behavior and Attitudes in Organizations, 1993.

[4] Subordinates rating managers: Organizational and demographic correlates of self/subordinate agreement, 1993.

[5] Cheri Ostroff, et al. Comparing Correlations Based on Individual-Level and Aggregated Data, 1993.

[6] Francis J. Yammarino, et al. Does Self-Other Agreement on Leadership Perceptions Moderate the Validity of Leadership and Performance Predictions?, 1992.

[7] John E. Hunter, et al. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings, 1991.

[8] Richard J. Shavelson, et al. Generalizability Theory: A Primer, 1991.

[9] Anne S. Tsui, et al. A Multiple-Constituency Model of Effectiveness: An Empirical Examination at the Human Resource Subunit Level, 1990.

[10] Anne S. Tsui, et al. Multiple Assessment of Managerial Effectiveness: Interrater Agreement and Consensus in Effectiveness Models, 1988.

[11] D. Rousseau. Issues of level in organizational research: Multi-level and cross-level perspectives, 1985.

[12] M. Mount, et al. Psychometric properties of subordinate ratings of managerial performance, 1984.

[13] Michael D. Mumford, et al. Social Comparison Theory and the Evaluation of Peer Evaluations: A Review and Some Applied Implications, 1983.

[14] A. Tsui. Qualities of Judgmental Ratings by Four Rater Sources, 1983.

[15] L. James. Aggregation Bias in Estimates of Perceptual Agreement, 1982.

[16] J. Fleiss, et al. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 1979.

[17] Glenn Firebaugh, et al. A Rule for Inferring Individual-Level Relationships from Aggregate Data, 1978.

[18] Allen I. Kraut, et al. Prediction of managerial success by peer and training-staff ratings, 1975.

[19] Donald B. Rubin, et al. The Dependability of Behavioral Measurements: Theory of Generalizability for Scores and Profiles, 1974.

[20] Karl E. Weick, et al. Managerial behavior, performance, and effectiveness, 1971.

[21] M. R. Novick, et al. Statistical Theories of Mental Test Scores, 1971.

[22] L. Gordon, et al. The Cross-Group Stability of Peer Ratings of Leadership Potential, 1965.

[23] R. Wherry, et al. Buddy Ratings: Popularity, 1949.