Easing the Inferential Leap in Competency Modeling: The Effects of Task-Related Information and Subject Matter Expertise

Despite the rising popularity of competency modeling in practice, research on the method has lagged behind. This study begins to close this practice-science gap through three studies (one lab study and two field studies) that employ generalizability analysis to shed light on (a) the quality of inferences made in competency modeling and (b) the effects of incorporating elements of traditional job analysis into competency modeling to raise the quality of competency inferences. Study 1 showed that competency modeling resulted in poor interrater reliability and poor between-job discriminant validity among inexperienced raters. In contrast, Study 2 suggested that the quality of competency inferences was higher among a variety of job experts in a real organization. Finally, Study 3 showed that blending competency modeling efforts with task-related information increased both interrater reliability among subject matter experts (SMEs) and their ability to discriminate among jobs. Taken together, these results highlight that the inferences made in competency modeling should not be taken for granted, and that practitioners can improve competency modeling efforts by incorporating some of the methodological rigor inherent in job analysis.
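The generalizability analysis mentioned above partitions rating variance into components (e.g., jobs, raters, residual) and summarizes dependability in a single coefficient. The following is a minimal illustrative sketch, not the authors' actual analysis: it estimates a relative G coefficient for a fully crossed jobs × raters design from two-way ANOVA mean squares, using entirely made-up ratings.

```python
# Illustrative sketch (hypothetical data, not from the paper): estimating a
# relative generalizability (G) coefficient for a fully crossed
# jobs x raters design, the kind of analysis used in G-theory studies
# of job-analysis and competency ratings.
import numpy as np

def g_coefficient(ratings: np.ndarray) -> float:
    """Relative G coefficient for a jobs x raters design.

    ratings: 2-D array, rows = jobs (objects of measurement),
             columns = raters (the facet generalized over).
    """
    n_jobs, n_raters = ratings.shape
    grand = ratings.mean()
    job_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    # Two-way ANOVA mean squares (one observation per cell).
    ms_jobs = n_raters * np.sum((job_means - grand) ** 2) / (n_jobs - 1)
    resid = ratings - job_means[:, None] - rater_means[None, :] + grand
    ms_resid = np.sum(resid ** 2) / ((n_jobs - 1) * (n_raters - 1))

    # Variance components (negative estimates truncated at zero).
    var_jobs = max((ms_jobs - ms_resid) / n_raters, 0.0)
    var_resid = ms_resid

    # Relative G coefficient for the mean across n_raters raters:
    # universe-score variance over itself plus relative error variance.
    return var_jobs / (var_jobs + var_resid / n_raters)

# Hypothetical ratings: 4 jobs rated by 3 raters on one competency.
ratings = np.array([
    [5.0, 4.0, 5.0],
    [2.0, 3.0, 2.0],
    [4.0, 4.0, 3.0],
    [1.0, 2.0, 1.0],
])
print(round(g_coefficient(ratings), 2))
```

In this toy matrix the jobs differ far more than the raters disagree, so the coefficient is high; the poor interrater reliability reported in Study 1 would correspond to a much lower value.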

[1]  Robert W. Eichinger,et al.  The LEADERSHIP ARCHITECT norms and validity report , 2003 .

[2]  Edward E. Lawler Competencies: A Poor Foundation for TheNewPay , 1996 .

[3]  Edwin T. Cornelius,et al.  A comparison of holistic and decomposed judgment strategies in job analyses by job incumbents. , 1980 .

[4]  Ernest S. Primoff How to prepare and conduct job element examinations , 1975 .

[5]  S. J. Motowidlo,et al.  Evidence that task performance should be distinguished from contextual performance. , 1994 .

[6]  S. Levitus,et al.  US Government Printing Office , 1998 .

[7]  Paul E. Spector,et al.  Relations of job characteristics from multiple data sources with employee affect, absence, turnover intentions, and health. , 1991, The Journal of applied psychology.

[8]  Kristin O. Prien,et al.  Perspectives on Nonconventional Job Analysis Methodologies , 2003 .

[9]  Juan I. Sanchez,et al.  Moderators of agreement between incumbent and non‐incumbent ratings of job characteristics , 1997 .

[10]  Deidra J. Schleicher,et al.  A field study of the effects of rating purpose on the quality of multisource ratings. , 2003 .

[11]  Paul E. Spector,et al.  Negative affectivity as the underlying cause of correlations between stressors and strains. , 1991, The Journal of applied psychology.

[12]  Donald B. Rubin,et al.  The Dependability of Behavioral Measurements: Theory of Generalizability for Scores and Profiles. , 1974 .

[13]  Eva Nick,et al.  The dependability of behavioral measurements: theory of generalizability for scores and profiles , 1973 .

[14]  David V. Day,et al.  Effects of frame-of-reference training on rater accuracy under alternative time delays , 1994 .

[15]  Sidney Gael The Job analysis handbook for business, industry, and government , 1988 .

[16]  Edward L. Levine,et al.  Determining important tasks within jobs: A policy-capturing approach. , 1989 .

[17]  Erich C. Dierdorff,et al.  A meta-analysis of job analysis reliability. , 2003, The Journal of applied psychology.

[18]  Ronald A. Ash,et al.  THE PRACTICE OF COMPETENCY MODELING , 2000 .

[19]  E. Prien,et al.  EVALUATION OF TASK AND JOB SKILL LINKAGE JUDGMENTS USED TO DEVELOP TEST SPECIFICATIONS , 1989 .

[20]  R. Brennan Elements of generalizability theory , 1983 .

[21]  Robert J. Harvey,et al.  Influence of amount of job descriptive information on job analysis rating accuracy. , 1988 .

[22]  Raymond E. Christal,et al.  The United States Air Force Occupational Research Project. , 1974 .

[23]  H. Griffiths Functional job analysis. , 1946, The British journal of physical medicine : including its application to industry.

[24]  Dennis Doverspike,et al.  Generalizability analysis of a point-method job evaluation instrument. , 1983 .

[25]  Phil Lewis,et al.  Revision of O*NET Data Collection Instruments , 2000 .

[26]  Edward L. Levine,et al.  Accuracy or consequential validity: which is the better standard for job analysis data? , 2000 .

[27]  Robert J. Harvey,et al.  CAN RATERS WITH REDUCED JOB DESCRIPTIVE INFORMATION PROVIDE ACCURATE POSITION ANALYSIS QUESTIONNAIRE (PAQ) RATINGS , 1986 .

[28]  N. Anderson,et al.  Handbook of Industrial, Work & Organizational Psychology , 2001 .

[29]  E. Levine,et al.  The Analysis of Work in the 20th and 21st Centuries , 2001 .

[30]  William Vance Clemans,et al.  An analytical and empirical examination of some properties of ipsative measures , 1967 .

[31]  Robert L. Dipboye,et al.  Effects of training and information on the accuracy and reliability of job evaluations. , 1988 .

[32]  Angelo S. DeNisi,et al.  EXPERT AND NAIVE RATERS USING THE PAQ: DOES IT MATTER? , 1984 .

[33]  F. Morgeson,et al.  Self-presentation processes in job analysis: a field experiment investigating inflation in abilities, tasks, and competencies. , 2004, The Journal of applied psychology.

[34]  Robert D. Gatewood,et al.  Human Resource Selection , 1997 .

[35]  Robert J. Harvey,et al.  A comparison of holistic versus decomposed rating of Position Analysis Questionnaire work dimensions. , 1988 .

[36]  Juan I. Sanchez,et al.  The impact of raters' cognition on judgment accuracy: An extension to the job analysis domain , 1994 .

[37]  Steven F. Cronshaw,et al.  Generalizability analysis of a point method job evaluation instrument: A field study. , 1984 .

[38]  Juan I. Sanchez Adapting Work Analysis to a Fast-Paced and Electronic Business World , 2000 .

[39]  Michael A. Campion,et al.  Accuracy in job analysis: toward an inference‐based model , 2000 .

[40]  O. F. Voskuijl,et al.  Determinants of Interrater Reliability of Job Analysis: A Meta-analysis , 2002 .

[41]  Barry Gerhart,et al.  Sources of variance in incumbent perceptions of job complexity. , 1988 .

[42]  G V Barrett,et al.  A reconsideration of testing for competence rather than for intelligence. , 1991, The American psychologist.

[43]  Gerald V. Barrett,et al.  A Reconsideration of Testing for Competence Rather than for Intelligence. , 1991 .

[44]  Felix M. Lopez,et al.  AN EMPIRICAL TEST OF A TRAIT‐ORIENTED JOB ANALYSIS TECHNIQUE , 1981 .

[45]  Keith H. Mandabach,et al.  A Note on the Reliability of Ranked Items , 2002 .

[46]  T. K. Srull,et al.  Category accessibility and social perception: Some implications for the study of person memory and interpersonal judgments , 1980 .

[47]  Chet Robie,et al.  A new look at within-source interrater reliability of 360-degree feedback ratings. , 1998 .

[48]  John H. Holland,et al.  Induction: Processes of Inference, Learning, and Discovery , 1987, IEEE Expert.

[49]  Edwin T. Cornelius,et al.  A comparison of holistic and decomposed judgment strategies in job analyses by job incumbents. , 1980 .

[50]  Brian E. Becker,et al.  The HR Scorecard: Linking People, Strategy, and Performance , 2001 .

[51]  Paul R. Sackett,et al.  Job and Work Analysis , 2003 .

[52]  C. C. Leek,et al.  Job Analysis , 1970 .

[53]  R. Tett,et al.  Development and Content Validation of a "Hyperdimensional" Taxonomy of Managerial Competence , 2000 .

[54]  Michael A. Campion,et al.  Social and Cognitive Sources of Potential Inaccuracy in Job Analysis , 1997 .

[55]  William P. Dunlap,et al.  Analysis of variance with ipsative measures , 1997 .

[56]  W. Borman,et al.  TIME‐SPENT RESPONSES AS TIME ALLOCATION STRATEGIES: RELATIONS WITH SALES PERFORMANCE IN A STOCKBROKER SAMPLE , 2006 .