Retest Effects in Operational Selection Settings: Development and Test of a Framework

This study proposes a framework for examining the effects of retaking tests in operational selection settings. A central feature of this framework is the distinction between within-person and between-person retest effects. This framework is used to develop hypotheses about retest effects for exemplars of three types of tests (knowledge tests, cognitive ability tests, and situational judgment tests) and to test these hypotheses in a high-stakes selection setting (admission to medical studies in Belgium). Analyses of within-person retest effects showed that mean scores of repeat test takers were one-third of a standard deviation higher for the knowledge test and the situational judgment test and one-half of a standard deviation higher for the cognitive ability test. The validity coefficients for the knowledge test differed significantly depending on whether examinees’ test scores on the first versus second administration were used, with the latter being more valid. Analyses of between-person retest effects on the prediction of academic performance showed that the same test score led to higher levels of performance for those passing on the first attempt than for those passing on the second attempt. The implications of these results are discussed in light of extant retesting practice.
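
To make the form of the three analyses concrete, the sketch below works through hypothetical versions of each in Python. All data, variable names (t1, t2, gpa, first), and effect sizes are invented for illustration and are not the study's. The validity-coefficient contrast uses Meng, Rosenthal, and Rubin's (1992) Z test for correlated correlations, one standard test for comparing two predictors against the same criterion; the between-person check uses a Cleary-style moderated regression with an attempt-group indicator, which is one common way to test whether the same score predicts different performance across groups (the paper's actual procedure may differ).

```python
"""Hypothetical illustrations of the three retest-effect analyses.
Data and variable names are invented for the sketch, not the study's."""

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical repeat test takers: scores at two administrations and a criterion.
n = 200
t1 = rng.normal(50, 10, n)                 # first-attempt test score
t2 = t1 + rng.normal(3.5, 6, n)            # second attempt, modest average gain
gpa = 0.04 * t2 + rng.normal(2.0, 0.4, n)  # later academic performance

# 1) Within-person retest effect: standardized mean gain,
#    expressed here in time-1 SD units (other conventions exist).
d_gain = (t2 - t1).mean() / t1.std(ddof=1)
print(f"standardized gain d = {d_gain:.2f}")

# 2) Do first- and second-administration scores differ in validity against
#    the same criterion? Meng, Rosenthal, and Rubin (1992) Z test for
#    correlated correlations, implemented from the published formulas.
def meng_z(r1, r2, r12, n):
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    rbar2 = (r1**2 + r2**2) / 2
    f = min((1 - r12) / (2 * (1 - rbar2)), 1.0)   # f is capped at 1
    h = (1 - f * rbar2) / (1 - rbar2)
    z = (z1 - z2) * np.sqrt((n - 3) / (2 * (1 - r12) * h))
    return z, 2 * stats.norm.sf(abs(z))           # two-sided p value

r1 = np.corrcoef(t1, gpa)[0, 1]   # validity of first-attempt scores
r2 = np.corrcoef(t2, gpa)[0, 1]   # validity of second-attempt scores
r12 = np.corrcoef(t1, t2)[0, 1]   # correlation between the two scores
z, p = meng_z(r1, r2, r12, n)
print(f"r1 = {r1:.2f}, r2 = {r2:.2f}, Meng Z = {z:.2f}, p = {p:.3f}")

# 3) Between-person retest effect: does the same score predict different
#    performance for first-attempt vs. second-attempt passers? A Cleary-style
#    moderated regression with a group indicator and an interaction term.
first = rng.integers(0, 2, n)          # 1 = passed on first attempt (hypothetical)
gpa_b = gpa + 0.15 * first             # build in an intercept gap for the demo
X = sm.add_constant(np.column_stack([t2, first, t2 * first]))
model = sm.OLS(gpa_b, X).fit()
print(np.round(model.params, 3))       # [intercept, score, group gap, interaction]
```

In this setup, a significant group coefficient with a nonsignificant interaction would correspond to the pattern the abstract reports: at the same test score, first-attempt passers go on to higher predicted performance than second-attempt passers.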
