This study was conducted to determine the validity of noncognitive and cognitive predictors of the performance of college students at the end of their fourth year in college. Results indicate that the primary predictors of cumulative college GPA were SAT/ACT scores and high school GPA (HSGPA), though biographical data (biodata) and situational judgment measures added incrementally to this prediction. SAT/ACT scores and HSGPA were collected and used in various ways by participating institutions in the admissions process, whereas the situational judgment and biodata measures were collected for research purposes only during the first few weeks of the participating students' freshman year. Alternative outcomes, such as self-reported performance on a range of student performance dimensions, a measure of organizational citizenship behavior, and class absenteeism, were best predicted by the noncognitive measures. We also report the racial composition of a student body selected using cognitive measures alone or both cognitive and noncognitive measures under various levels of selectivity, as well as the performance of students admitted under these scenarios. We conclude that both the biodata and situational judgment test (SJT) measures could be useful as supplements to cognitive indices of student potential in college admissions.

Alternatives to GPA

Prediction of Four-Year College Student Performance Using Cognitive and Noncognitive Predictors and the Impact on Demographic Status of Admitted Students

As is true when organizations hire employees, colleges and universities seek to recruit and admit the best students. Just as the qualifications that make a good employee vary across organizations and managers, so do the factors underlying notions of excellent student performance. In the educational context, these factors vary as a function of the university and of the admissions personnel who evaluate student credentials and performance.
Traditionally, college admissions personnel use high school grade point average (HSGPA), standardized tests of cognitive ability in the areas of verbal and mathematical skills (SAT/ACT), and sometimes records of achievement in specific subject matter areas to assess student potential. Each factor provides unique information about the applicant. Letters of recommendation, essays, and interviews are increasingly used by universities to complement HSGPA and SAT/ACT scores. Schools vary widely in their assessment of the information contained in these supplemental materials. For example, while a reviewer at one school might assign a subjective rating to each component of the application, a reviewer at another school might form ratings of personal qualities (e.g., leadership) based on a holistic review of the materials (Rigol, 2003). Clearly, any systematic and thorough processing of this information, especially when large numbers of applicants must be processed in a short period of time, places a heavy burden on admissions personnel. Standardized cognitive ability or achievement tests (such as the SAT/ACT) can be administered efficiently to large numbers of students, and they provide a standard of comparison across students with differing educational backgrounds. Moreover, research has demonstrated consistently high criterion-related validities (approximately r = .45) with cumulative college GPA, in addition to smaller but practically significant relationships with study habits, persistence, and degree attainment (Hezlett et al., 2001). Higher validities are often observed when the outcomes assessed are more proximal, such as first-year college GPA (Kuncel, Hezlett, & Ones, 2001, 2004). In a recent study, Sackett, Kuncel, Arneson, Cooper, and Waters (2009) examined several large datasets and found strong relationships between standardized tests and academic performance (r = .44).
They found that the vast majority of these relationships remained strong even after controlling for factors such as socioeconomic status. On the whole, both high school grade point average and standardized tests have been shown to have predictive validity for a variety of academic performance outcomes (e.g., Bridgeman, McCamley-Jenkins, & Ervin, 2000; Kuncel, Credé, & Thomas, 2007; Kuncel & Hezlett, 2007; Kuncel, Hezlett, & Ones, 2001, 2004). Some college personnel and researchers, however, have reservations about standardized cognitive ability tests. Researchers point to the fact that, even with the relatively high validity of the SAT and ACT college admissions tests and HSGPA, a large portion of the variance in college student performance measures remains unexplained (Breland, 1998; Payne, Rapley, & Wells, 1973). Various stakeholders in admissions testing are also becoming increasingly insistent in demanding a broader array of selection tools with adequate criterion-related validity, less adverse impact, and greater relevance to a broader conceptualization of college performance. As a result of these demands, universities are already changing the role that standardized tests (SAT or ACT) play in the selection process. For example, the University of California has begun to use the SAT-II, an instrument more directly tied to high school curricula, for admission decisions. More recently, in 2008, Wake Forest University became the first top-30 national university to make standardized tests (SAT or ACT) optional (Landau, 2008). More generally, a National Association for College Admission Counseling (NACAC) commission (2008) recommended that the role of standardized tests in college admissions be reevaluated and perhaps diminished. There are a number of potential benefits to be gained from broadening the selection criteria beyond SAT/ACT and HSGPA; one important benefit is the potential increase in the diversity of students admitted into colleges.
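The "controlling for socioeconomic status" analysis mentioned above can be illustrated with a partial correlation: residualize both the test score and the grade outcome on SES, then correlate the residuals. The sketch below uses synthetic data with made-up effect sizes; the variable names and coefficients are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: SES influences both test scores and college GPA,
# but test scores also carry SES-independent predictive signal.
ses = rng.normal(size=n)
test = 0.3 * ses + rng.normal(size=n)
gpa = 0.3 * ses + 0.4 * test + rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residual of x on z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residual of y on z
    return np.corrcoef(rx, ry)[0, 1]

zero_order = np.corrcoef(test, gpa)[0, 1]
partial = partial_corr(test, gpa, ses)
print(f"zero-order r = {zero_order:.2f}, r controlling for SES = {partial:.2f}")
```

With these hypothetical parameters the partial correlation shrinks only modestly relative to the zero-order correlation, which is the qualitative pattern the Sackett et al. finding describes.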
Whereas minority students often score lower on cognitive ability tests such as the SAT/ACT, there are small or no differences between majority and minority groups on many noncognitive assessments of background, interests, and motivation (Hough, 1998; Sackett, Schmitt, Ellingson, & Kabin, 2001). These relative differences in the measures translate into different rates of selection across demographic groups, depending on the institution's selectivity and the manner in which the tests are used. The need to incorporate more than just cognitive factors in the admissions process has led to growing interest in noncognitive predictors of academic performance. Past studies have examined the role of noncognitive predictors of academic success such as metacognitive skills (e.g., Zeegers, 2001), study attitudes (e.g., W. S. Zimmerman, Parks, Gray, & Michael, 1977), study motivation (e.g., Melancon, 2002), and personality traits (e.g., Ridgell & Lounsbury, 2004). In a more recent meta-analysis, Credé and Kuncel (2008) found that noncognitive factors such as study habits, study skills, and study motivation, among other attitudinal constructs, accounted for incremental variance in academic performance beyond standardized tests and previous grades. A challenge, however, in including these noncognitive predictors and broadening the selection criteria is maintaining an objective means of comparing applicants not only on their cognitive ability but also on their noncognitive attributes and profiles (e.g., citizenship, perseverance, adaptability). The latter noncognitive attributes are often thought to be represented in essays and interviews, both of which are labor intensive to score reliably, particularly in large undergraduate universities.
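Incremental variance of the kind Credé and Kuncel describe is typically assessed with hierarchical regression: fit the outcome on the cognitive predictors alone, then add the noncognitive score and examine the gain in R². A minimal sketch on synthetic data follows; the predictor names and effect sizes are illustrative assumptions, not estimates from any of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic predictors: two cognitive (test score, HSGPA) and one
# noncognitive (e.g., a biodata scale) with a smaller unique effect.
test = rng.normal(size=n)
hsgpa = 0.5 * test + rng.normal(size=n)   # cognitive measures correlate
biodata = rng.normal(size=n)              # roughly independent here
gpa = 0.4 * test + 0.3 * hsgpa + 0.2 * biodata + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_cog = r_squared([test, hsgpa], gpa)
r2_full = r_squared([test, hsgpa, biodata], gpa)
print(f"R2 cognitive only = {r2_cog:.3f}, "
      f"delta R2 from biodata = {r2_full - r2_cog:.3f}")
```

The gain in R² from the added block is the "incremental validity" statistic; it is small whenever the new predictor's unique effect is small, even if its zero-order validity is respectable.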
Consistent with this challenge, our research team, with the support of the College Board, has been working for the last several years to develop and validate two noncognitive measures, using an objectively scored format, that help evaluate applicants on twelve different dimensions relevant to college performance (see Oswald, Schmitt, Kim, Ramsay, & Gillespie, 2004; Schmitt et al., 2003; Schmitt et al., 2007). Oswald et al. reported promising validities for biodata and situational judgment measures against a variety of outcomes measured at the end of the first year of college at one large university. Schmitt et al. (2007) reported encouraging validity against college GPA, absenteeism, and several nonacademic criteria for a group of students from 10 different universities at the conclusion of their first year in college. The focus of the latter study was on using biodata, situational judgment, and ability measures to profile students with differing outcome profiles. In the current article, we report four-year predictive validities for the sample of 2,771 students evaluated in Schmitt et al. (2007), using college GPA, graduation status, class attendance, academic satisfaction, and organizational citizenship behavior as outcomes. In addition, with this sample of students, we examine the consequences of using the biodata and situational judgment measures, SAT/ACT, and HSGPA in a composite to make admissions decisions at varying levels of selectivity. The outcomes with respect to the ethnic diversity of the students admitted and their average GPA under these various hypothetical conditions are reported.

Contributions of this Study

The current four-year longitudinal study provides predictive validities for both cognitive predictors (test scores and high school grades) and noncognitive predictors (biodata and SJT) against a variety of academic outcomes: cumulative four-year college GPA, graduation status, class attendance, academic satisfaction, and organizational citizenship behavior.
The present study also illustrates how the use of both cognitive and noncognitive predictors may influence the ethnic diversity of admitted students at varying levels of selectivity. As was noted at the beginning of this article, the admissions problems of academic administrators are very similar to those of private and public employers in at least four important ways. As in business organizations, there i
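The admissions scenarios of this kind, ranking applicants on a composite and admitting the top fraction at each selectivity level, can be sketched as a small top-down selection simulation. Everything below (group sizes, the subgroup mean difference, and the equal predictor weights) is hypothetical and chosen only to illustrate the mechanics, not taken from the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_major, n_minor = 8000, 2000

# Hypothetical standardized scores: a subgroup mean difference on the
# cognitive composite (d = 1.0) but none on the noncognitive composite.
cog = np.concatenate([rng.normal(0.0, 1, n_major),
                      rng.normal(-1.0, 1, n_minor)])
noncog = rng.normal(0.0, 1, n_major + n_minor)
minority = np.concatenate([np.zeros(n_major, bool), np.ones(n_minor, bool)])

def selection_rates(score, frac):
    """Admit the top `frac` of applicants; return rate per group."""
    cutoff = np.quantile(score, 1 - frac)
    admitted = score >= cutoff
    return admitted[~minority].mean(), admitted[minority].mean()

for frac in (0.5, 0.25, 0.10):          # varying selectivity
    maj_c, min_c = selection_rates(cog, frac)
    maj_b, min_b = selection_rates(cog + noncog, frac)  # equal-weight composite
    print(f"top {frac:.0%}: cognitive-only minority/majority ratio = "
          f"{min_c / maj_c:.2f}, combined composite ratio = {min_b / maj_b:.2f}")
```

Because adding an uncorrelated predictor with no subgroup difference dilutes the composite's effective mean difference, the minority-to-majority selection ratio rises under the combined composite at every selectivity level, which is the mechanism behind the adverse-impact comparisons described above.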
References

[1] Shonna D. Waters, et al. (2009). Does socioeconomic status explain the relationship between admissions tests and post-secondary academic performance? Psychological Bulletin.
[2] Anne Anastasi, et al. (1960). The validation of a biographical inventory as a predictor of college success: Development and validation of the scoring key.
[3] Philip L. Roth, et al. (1999). Derivation and implications of a meta-analytic matrix incorporating cognitive ability, alternative predictors, and job performance.
[4] Robert M. Guion, et al. (1997). Assessment, Measurement, and Prediction for Personnel Decisions.
[5] M. D. Dunnette, et al. (1990). An alternative selection procedure: The low-fidelity simulation.
[6] H. Stumpf, et al. (1998). A validation of the five-factor model of personality in academically talented youth across observers and instruments.
[7] Walter C. Borman, et al. (1997). Task performance and contextual performance: The meaning for personnel selection research.
[8] D. Payne. (1973). Application of a biographical data inventory to estimate college academic achievement.
[9] Filip Lievens, et al. (2005). The operational validity of a video-based situational judgment test for medical college admissions: Illustrating the importance of matching predictor and criterion construct domains. Journal of Applied Psychology.
[10] N. Kuncel, et al. (2008). Study habits, skills, and attitudes: The third pillar supporting collegiate academic performance. Perspectives on Psychological Science.
[11] Gerald V. Barrett, et al. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases.
[12] Brent Bridgeman, et al. (2000). Predictions of freshman grade-point average from the revised and recentered SAT I: Reasoning Test. College Board Research Report.
[13] R. Bies, et al. (1989). Organizational Citizenship Behavior: The Good Soldier Syndrome.
[14] Jennifer Hedlund, et al. (2006). Assessing practical intelligence in business school admissions: A supplement to the Graduate Management Admissions Test.
[15] Michael D. Mumford, et al. (1999). Background data and autobiographical memory: Effects of item types and task characteristics.
[16] R. Klimoski, et al. (1982). Is it rational to be empirical? A test of methods for scoring biographical data.
[17] Marinella Paciello, et al. (2008). Assessing personality in early adolescence through self-report and other-ratings: A multitrait-multimethod analysis of the BFQ-C.
[18] T. Cleary. (1968). Test bias: Prediction of grades of Negro and white students in integrated colleges.
[19] Janet G. Melancon. (2002). Reliability, structure, and correlates of Learning and Study Strategies Inventory scores.
[20] Paul R. Sackett, et al. (1997). The effects of forming multi-predictor composites on group differences and adverse impact.
[21] Timothy J. Pantages, et al. (1978). Studies of college attrition: 1950–1975.
[22] N. Schmitt, et al. (2004). Developing a biodata measure and situational judgment inventory as predictors of college student performance. Journal of Applied Psychology.
[23] Phillip L. Ackerman, et al. (1989). Within-task intercorrelations of skilled performance: Implications for predicting individual differences? A comment on Henry & Hulin, 1987.
[24] Jill E. Ellingson, et al. (2001). High-stakes testing in employment, credentialing, and higher education: Prospects in a post-affirmative-action world. American Psychologist.
[25] W. Michael, et al. (1977). The validity of traditional cognitive measures and of scales of the Study Attitudes and Methods Survey in the prediction of the academic success of Educational Opportunity Program students.
[26] G. Stokes, et al. (2001). Content/construct approaches in life history form development for selection.
[27] K. E. Barron, et al. (2002). Predicting success in college: A longitudinal study of achievement goals and ability measures as predictors of interest and performance from freshman year through graduation.
[28] Filip Lievens, et al. (2007). Combining predictors to achieve optimal trade-offs between selection quality and adverse impact. Journal of Applied Psychology.
[29] Sarah A. Hezlett, et al. (2004). Academic performance, career potential, creativity, and job performance: Can one construct predict them all? Journal of Personality and Social Psychology.
[30] Kristy J. Lauver, et al. (2004). Do psychosocial and study skill factors predict college outcomes? A meta-analysis. Psychological Bulletin.
[31] R. H. Moorman, et al. (1995). Individualism-collectivism as an individual difference predictor of organizational citizenship behavior.
[32] R. Dalal. (2005). A meta-analysis of the relationship between organizational citizenship behavior and counterproductive work behavior. Journal of Applied Psychology.
[33] Neal Schmitt, et al. (1997). Adverse impact and predictive efficiency of various predictor combinations.
[34] J. L. Holland, et al. (1967). Prediction of student accomplishment in college. Journal of Educational Psychology.
[35] Mary Pommerich, et al. (1997). Concordance between ACT Assessment and recentered SAT I sum scores.
[36] Lyle F. Schoenfeldt, et al. (1979). Toward a classification of persons.
[37] John W. Lounsbury, et al. (2004). Predicting academic success: General intelligence, "Big Five" personality traits, and work drive.
[38] Gretchen W. Rigol. (2003). Admissions decision-making models: How U.S. institutions of higher education select undergraduate students.
[39] Neal Schmitt, et al. (2007). The use of background and ability profiles to predict college student outcomes. Journal of Applied Psychology.
[40] Michael D. Mumford, et al. (1987). Methodology review: Principles, procedures, and findings in the application of background data measures.
[41] Michael A. McDaniel, et al. (2001). Use of situational judgment tests to predict job performance: A clarification of the literature. Journal of Applied Psychology.
[42] Robert E. Ployhart, et al. (2001). Determinants, detection and amelioration of adverse impact in personnel selection procedures: Issues, evidence and lessons learned.
[43] Philip Bobko, et al. (1978). Testing for fairness with a moderated multiple regression strategy: An alternative to differential analysis.
[44] Rebecca A. Henry, et al. (1989). Changing validities: Ability-performance relations and utilities.
[45] Fred A. Mael. (2006). A conceptual rationale for the domain and attributes of biodata items.
[46] Robert E. Ployhart, et al. (2005). Staffing Organizations: Contemporary Practice and Theory.
[47] P. Zeegers, et al. (2001). Approaches to learning in science: A longitudinal study. British Journal of Educational Psychology.
[48] N. Schmitt, et al. (2006). Situational judgment tests: Method or construct?
[49] F. Schmidt, et al. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings.
[50] Nathan S. Hartman, et al. (2007). Situational judgment tests, response instructions, and validity: A meta-analysis.
[51] W. Sedlacek, et al. (1988). Noncognitive predictors of academic success for international students: A longitudinal study. Research Report #1-87.
[52] W. Borman, et al. (1993). Expanding the criterion domain to include elements of contextual performance.
[53] N. Kuncel, et al. (2007). A meta-analysis of the predictive validity of the Graduate Management Admission Test (GMAT) and undergraduate grade point average (UGPA) for graduate student academic performance.
[54] M. Mumford, et al. (1992). Developmental determinants of individual action: Theory and practice in applying background measures.
[55] Sarah A. Hezlett, et al. (2007). Standardized tests predict graduate students' success. Science.