360° ratings: An analysis of assumptions and a research agenda for evaluating their validity

Abstract This article argues that key assumptions underlying 360° ratings should be examined, most notably that different rating sources offer relatively unique perspectives on performance and that multiple rating sources provide incremental validity over individual sources. Studies generally support the first assumption, although the reasons for interrater disagreement across organizational levels remain unclear. Two research directions are suggested for learning more about why raters at different organizational levels tend to disagree in their ratings and, thus, how to improve the interpretation of 360° ratings. Regarding the second assumption, it is argued that we might resurrect the hypothesis that low-to-moderate interrater agreement across organizational levels is actually a positive result, reflecting raters at different levels each making reasonably valid performance judgments, but on partially different aspects of job performance. Three approaches to testing this hypothesis are offered.
