A comparison of two ways of evaluating research units working in different scientific fields

This paper studies the evaluation of research units that publish their output in several scientific fields. One possible solution relies on the prior normalization of the raw citations received by publications in all fields; in a second step, a citation indicator is applied to the units' field-normalized citation distributions. We also study an alternative solution that begins by applying a size- and scale-independent citation impact indicator to the units' raw citation distributions in each field; in a second step, the citation impact of a research unit is calculated as the average of the citation impact the unit achieves in each field, weighted by its publication output in that field. The two alternatives are compared using the 500 universities in the 2013 edition of the CWTS Leiden Ranking, whose research output is evaluated with two citation impact indicators with very different properties. We use a large Web of Science dataset consisting of 3.6 million articles published in the 2005–2008 period, and a classification system that distinguishes 5119 clusters. The two main findings are as follows. First, differences in production and citation practices between the 3332 clusters with more than 250 publications account for 22.5% of overall citation inequality; after the standard field-normalization procedure, in which cluster mean citations are used as normalization factors, this quantity is reduced to 4.3%. Second, the differences between the university rankings produced by the two solutions to the all-sciences aggregation problem are small for both citation impact indicators.
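
To make the two routes concrete, the following is a minimal Python sketch of the aggregation step. The function and variable names, the toy data, and the use of np.mean as a stand-in indicator are ours for illustration only; the paper evaluates indicators with specific properties, including size- and scale-independence in the second route.

import numpy as np

def route1_impact(unit_cites_by_field, field_means, indicator):
    """Normalize, then evaluate: divide each publication's raw citations
    by its field's mean citation count (the standard normalization),
    pool the normalized values, and apply the indicator once."""
    pooled = np.concatenate([
        np.asarray(cites, dtype=float) / field_means[field]
        for field, cites in unit_cites_by_field.items()
    ])
    return indicator(pooled)

def route2_impact(unit_cites_by_field, indicator):
    """Evaluate, then average: apply the indicator to the unit's raw
    citation distribution within each field, then average the field
    scores weighted by the unit's publication output in each field."""
    scores, weights = [], []
    for cites in unit_cites_by_field.values():
        scores.append(indicator(np.asarray(cites, dtype=float)))
        weights.append(len(cites))
    return np.average(scores, weights=weights)

# Toy data: one unit's raw citation counts in two fields, plus
# field-wide mean citations used as normalization factors in route 1.
unit = {"field_A": [0, 2, 5, 33], "field_B": [1, 1, 4]}
field_means = {"field_A": 10.0, "field_B": 2.0}

print(route1_impact(unit, field_means, np.mean))  # prints 1.0 for this toy data
print(route2_impact(unit, np.mean))               # prints ~6.57 (per-field means averaged by output)

With np.mean as the indicator, the first route reduces to a mean normalized citation score; the second route instead scores each field separately on raw citations and lets the unit's output in each field determine that field's weight.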
