Mend It, Don’t End It: An Alternate View of Assessment Center Construct-Related Validity Evidence

The unitarian conceptualization of validity serves as the conceptual and logical basis for the so-called assessment center (AC) "construct-related validity paradox." Within the unitarian framework, at a theoretical level, if a measurement tool demonstrates criterion-related and content-related validity evidence, as is widely accepted with ACs, then it should also be expected to demonstrate construct-related validity evidence (Binning & Barrett, 1989). Because ACs do not appear to do so, we have the resultant AC construct-related validity paradox. So, accepting the premise that the unitarian view is conceptually and logically sound, what is the explanation for the paradox? Why do AC dimension ratings appear not to "work" in terms of construct-related validity evidence?

At a broad conceptual level, we present a view that is contrarian to Lance's (2008) view of "why ACs don't work the way they're supposed to" and, subsequently, what to do about them. Our contrarian view is based on two key points, namely that the vast majority of the empirical AC research to date—particularly that which serves as the basis for calls for the "redesign of ACs toward task- or role-based ACs and away from traditional dimension-based ACs" (Lance, 2008, p. 84)—is based on (a) espoused as opposed to actual constructs and (b) flawed analyses resulting from an overemphasis on postexercise dimension ratings as measures of AC dimensions.

In our view, ACs in practice appear to be effectively designed to representatively sample from the job content domain and also to predict criteria of interest, but they are woefully deficient in their construct explication and development. Consequently, we do not concur with Lance's interpretation of the extant literature or with his conclusions concerning what to do about it. Our position is that the issue is not one of a failure in "AC theory" but rather a failure to engage in appropriate tests of said theory. Until such tests have been undertaken, we think it is premature to abandon dimension-based ACs.

Correspondence concerning this article should be addressed to Winfred Arthur, Jr., Department of Psychology, Texas A&M University; e-mail: wea@psyc.tamu.edu. Eric Anthony Day, Department of Psychology, The University of Oklahoma; David J. Woehr, Department of Management, The University of Tennessee. Industrial and Organizational Psychology, 1 (2008), 105–111. Copyright © 2008 Society for Industrial and Organizational Psychology.
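The empirical pattern at the heart of the paradox is easy to see in miniature. The following Python sketch (our illustration, not part of the original commentary) simulates post-exercise dimension ratings in which exercise (method) variance is assumed to outweigh dimension (trait) variance, and then computes the two classic multitrait-multimethod summaries: correlations between the same dimension across exercises (convergent evidence) and between different dimensions within the same exercise (discriminant evidence). All variance components are assumed values chosen only to reproduce the pattern commonly reported in the literature (e.g., Sackett & Dreher, 1982).

```python
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_dims, n_exercises = 500, 3, 3

# Simulate post-exercise dimension ratings (PEDRs) where exercise
# (method) variance outweighs dimension (trait) variance -- assumed
# variance components, chosen only to mimic the commonly reported pattern.
dim_effects = rng.normal(0, 0.4, (n_candidates, n_dims))       # weak trait signal
ex_effects = rng.normal(0, 1.0, (n_candidates, n_exercises))   # strong method signal
noise = rng.normal(0, 0.5, (n_candidates, n_dims, n_exercises))
ratings = dim_effects[:, :, None] + ex_effects[:, None, :] + noise

# Flatten to candidates x (dimension, exercise) columns and correlate.
flat = ratings.reshape(n_candidates, -1)
corr = np.corrcoef(flat, rowvar=False)

# Convergent evidence: same dimension, different exercises
# (monotrait-heteromethod). Discriminant check: different dimensions,
# same exercise (heterotrait-monomethod).
convergent, discriminant = [], []
for i in range(n_dims * n_exercises):
    for j in range(i + 1, n_dims * n_exercises):
        di, ei = divmod(i, n_exercises)
        dj, ej = divmod(j, n_exercises)
        if di == dj and ei != ej:
            convergent.append(corr[i, j])
        elif di != dj and ei == ej:
            discriminant.append(corr[i, j])

print(f"mean monotrait-heteromethod r = {np.mean(convergent):.2f}")
print(f"mean heterotrait-monomethod r = {np.mean(discriminant):.2f}")
# Under these assumptions the heterotrait-monomethod correlations exceed
# the monotrait-heteromethod ones -- the signature of the 'paradox'.
```

Whether that pattern indicts AC theory or merely the way PEDRs have been analyzed is precisely the point at issue in this exchange.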

References

[1] F. Lievens. Factors which improve the construct validity of assessment centers: A review, 1998.

[2] A. W. Meade, et al. A Monte Carlo investigation of assessment center construct validity models, 2007.

[3] C. E. Lance. Why assessment centers do not work the way they are supposed to, Industrial and Organizational Psychology, 2008.

[4] J. F. Binning & G. V. Barrett. Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases, 1989.

[5] W. Arthur, et al. The construct-related validity of assessment center ratings: A review and meta-analysis of the role of methodological factors, 2003.

[6] H. G. Osburn, et al. Effects of the rating process on the construct validity of assessment center dimension evaluations, 2000.

[7] A. Ryan. Defining ourselves: I-O psychology's identity quest, 2003.

[8] A. Howard. A reassessment of assessment centers: Challenges for the 21st century, 1997.

[9] P. R. Sackett, et al. Constructs and assessment center dimensions: Some troubling empirical findings, 1982.

[10] S. Zedeck. A process analysis of the assessment center method, 1986.

[11] A. J. Villado, et al. The importance of distinguishing between constructs and methods when comparing predictors in personnel selection research and practice, Journal of Applied Psychology, 2008.

[12] S. Messick. Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning, Research Report RR-94-45, 1994.

[13] A. Anastasi & S. Urbina. Psychological testing (7th ed.), 1997.

[14] M. C. Bowler & D. J. Woehr. A meta-analytic evaluation of the impact of dimension and exercise factors on assessment center ratings, Journal of Applied Psychology, 2006.

[15] E. A. Day, et al. A meta-analysis of the criterion-related validity of assessment center dimensions, 2003.

[16] F. J. Landy. Stamp collecting versus science: Validation as hypothesis testing, 1986.

[17] W. Arthur, et al. Convergent and discriminant validity of assessment center dimensions: A conceptual and empirical reexamination of the assessment center construct-related validity paradox, 2000.

[18] American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. Standards for educational and psychological testing, 1999.