Why Meta-Analysis Doesn’t Tell Us What the Data Really Mean: Distinguishing between Moderator Effects and Moderator Processes

Traditional approaches to detecting the presence of moderators in meta-analyses involve inferences drawn from the residual variance in criterion-related validities (ri) after correcting for sampling error and statistical artifacts. James, Demaree, Mulaik, and Ladd (1992) argued that these residualized interpretations of meta-analytic results may be spurious when statistical artifacts covary with true moderators. We extend their model to suggest that situational moderators may also covary with sample size and sample content (i.e., nonrandom sample selection error), rendering the meta-analysis uninterpretable and producing a significant correlation between criterion-related validities and ni. We investigate this possibility using studies of the criterion-related validity of peer nominations originally reported by Kane and Lawler (1978). Application of residualized meta-analysis suggests the presence of moderator effects, but a significant correlation between ri and ni precludes interpretation of the moderator process behind these effects. More generally, we argue that the nature of true contingencies cannot be inferred from meta-analytic summaries of traditional criterion-related validity studies. Primary research with appropriate controls is the only means of identifying true moderator effects and processes affecting criterion-related validity.
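The residualized procedure the abstract critiques can be sketched in a few lines. The Python sketch below uses hypothetical validity coefficients and sample sizes (the arrays r and n and all values are illustrative assumptions, not the Kane and Lawler data) to show one common "bare-bones" Hunter-Schmidt computation of the residual variance S²ρ = S²r − S²e, the 75% rule, and the ri-ni correlation check discussed above.

```python
# Minimal sketch of a bare-bones residual-variance meta-analysis,
# plus the r_i-n_i correlation diagnostic discussed in the abstract.
# All data below are hypothetical illustrations.
import numpy as np

r = np.array([0.21, 0.35, 0.18, 0.42, 0.29, 0.33])  # observed validities r_i (hypothetical)
n = np.array([45, 120, 38, 210, 75, 95])             # study sample sizes n_i (hypothetical)

# Sample-size-weighted mean validity and observed variance of the r_i
r_bar = np.sum(n * r) / np.sum(n)
s2_r = np.sum(n * (r - r_bar) ** 2) / np.sum(n)

# Expected sampling-error variance (one common bare-bones form, using the average n)
n_bar = np.mean(n)
s2_e = (1 - r_bar ** 2) ** 2 / (n_bar - 1)

# Residual variance estimate: S^2_rho = S^2_r - S^2_e
s2_rho = s2_r - s2_e
pct_artifact = s2_e / s2_r  # 75% rule: >= .75 is read as "no moderators"

# Diagnostic raised in the abstract: do validities covary with sample size?
r_n_corr = np.corrcoef(r, n)[0, 1]

print(f"weighted mean r = {r_bar:.3f}")
print(f"S2_r = {s2_r:.4f}, S2_e = {s2_e:.4f}, S2_rho = {s2_rho:.4f}")
print(f"share attributable to sampling error = {pct_artifact:.0%}")
print(f"correlation between r_i and n_i = {r_n_corr:.2f}")
```

Under this logic, a large residual variance is read as evidence of moderator effects, whereas a substantial ri-ni correlation signals the nonrandom sample selection problem that, per the abstract, makes the underlying moderator process uninterpretable.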

[1] Randall P. Settoon, et al. Investigator characteristics as moderators of personnel selection research: A meta-analysis, 1994.

[2] C. Judd, et al. Statistical difficulties of detecting interactions and moderator effects. Psychological Bulletin, 1993.

[3] Frank L. Schmidt, et al. What do data really mean? Research findings, meta-analysis, and cumulative knowledge in psychology, 1992.

[4] Lawrence R. James, et al. Validity generalization in the context of situational models, 1992.

[5] John E. Hunter, et al. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings, 1991.

[6] Michael D. Mumford, et al. Patterns of Life History: The Ecology of Human Individuality, 1990.

[7] Michael D. Mumford, et al. Validity generalization: Rejoinder to Schmidt, Hunter, and Raju (1988), 1988.

[8] John E. Hunter, et al. Validity generalization and situational specificity: A second look at the 75% rule and Fisher's z transformation, 1988.

[9] What is the interpretation of the validity generalization estimate S²ρ = S²r − S²e?, 1988.

[10] Edward R. Kemery, et al. The power of the Schmidt and Hunter additive model of validity generalization, 1987.

[11] N. Schmitt, et al. On shifting standards for conclusions regarding validity generalization, 1986.

[12] Lawrence R. James, et al. A note on validity generalization procedures, 1986.

[13] Michael A. McDaniel, et al. Interpreting the results of meta-analytic research: A comment on Schmitt, Gooding, Noe, and Kirsch (1984), 1986.

[14] L. Hedges, et al. Statistical Methods for Meta-Analysis, 1987.

[15] John E. Hunter, et al. Forty questions about validity generalization and meta-analysis, 1985.

[16] Michael P. Kirsch, et al. Meta-analyses of validity studies published between 1964 and 1982 and the investigation of study characteristics, 1984.

[17] J. Hunter, et al. Validity and utility of alternative predictors of job performance, 1984.

[18] R. Berk. An introduction to sample selection bias in sociological data, 1983.

[19] Gregg B. Jackson, et al. Meta-Analysis: Cumulating Research Findings Across Studies, 1982.

[20] Jeffrey S. Kane, et al. Methods of peer assessment, 1978.

[21] John E. Hunter, et al. Development of a general solution to the problem of validity generalization, 1977.

[22] D. Campbell, et al. Experimental and Quasi-Experimental Designs for Research, 2012.