The problem of citation impact assessments for recent publication years in institutional evaluations

Bibliometrics has become an indispensable tool in the evaluation of institutions (in the natural and life sciences); an evaluation report without bibliometric data has become a rarity. However, evaluations are often expected to measure the citation impact of publications from very recent years in particular. Since a citation analysis is only meaningful for publications with a citation window of at least three years, very recent years cannot (and should not) be included in the analysis. This study presents various options for dealing with this problem in statistical analysis. The publications of two universities from 2000 to 2011 serve as a sample dataset (n = 2,652; university 1 = 1,484, university 2 = 1,168). One option is to plot the citation impact data (percentiles) and to extrapolate a regression line, fitted to the 'distant' publication years (with a confidence interval), into the 'very recent' publication years to show the expected trend. Another way of dealing with the problem is to work with the concepts of samples and populations. The third option (closely related to the second) is to apply the counterfactual concept of causality.
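The first option described above can be sketched as a simple least-squares trend with a confidence band. The sketch below is illustrative only: the percentile values per publication year are hypothetical (the paper's actual university data are not reproduced here), and the t critical value is hard-coded for the assumed number of 'distant' years.

```python
import math

def fit_trend_with_ci(years, percentiles, predict_years, t_crit=2.365):
    """Fit an OLS line of mean percentile impact on publication year
    ('distant' years only) and extrapolate it, with a 95% confidence
    band for the fitted mean, into the 'very recent' years.
    t_crit is the two-sided 95% t value for df = n - 2 (here df = 7,
    matching the nine 'distant' years below); adjust for other n."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(percentiles) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(years, percentiles))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    # residual standard error around the fitted line
    sse = sum((y - (intercept + slope * x)) ** 2
              for x, y in zip(years, percentiles))
    s = math.sqrt(sse / (n - 2))
    band = []
    for x0 in predict_years:
        yhat = intercept + slope * x0
        # band widens as x0 moves away from the mean of the fitted years
        half = t_crit * s * math.sqrt(1 / n + (x0 - xbar) ** 2 / sxx)
        band.append((x0, yhat, yhat - half, yhat + half))
    return slope, band

# Hypothetical mean percentiles per 'distant' year (50 = field average)
distant_years = list(range(2000, 2009))  # 2000-2008
percentiles = [55.0, 56.2, 54.8, 57.1, 56.5, 58.0, 57.4, 58.9, 58.3]

slope, band = fit_trend_with_ci(distant_years, percentiles,
                                [2009, 2010, 2011])
for year, yhat, lo, hi in band:
    print(f"{year}: {yhat:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

Plotting this band over the observed percentiles gives the graphic described in the abstract: citation windows for 2009-2011 are too short for a direct analysis, so the band shows where their impact would be expected to lie if the trend of the reliably measurable years continued.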
