Quantitative evaluation of alternative field normalization procedures

Wide differences in publication and citation practices make direct comparisons of raw citation counts across scientific disciplines impossible. Recent research has examined both new and traditional normalization procedures aimed at suppressing, as far as possible, these disparities in citation numbers among scientific domains. Using the recently introduced IDCP (Inequality due to Differences in Citation Practices) method, this paper rigorously tests the performance of six cited-side normalization procedures based on the Thomson Reuters classification system of 172 sub-fields. We use six yearly datasets from 1980 to 2004, with widely varying citation windows extending from the publication year to May 2011. The main findings are threefold. First, as observed in previous research, within each year the shapes of sub-field citation distributions are strikingly similar. This allows several normalization procedures to perform reasonably well in reducing the effect of differences in citation practices on citation inequality. Second, independently of the year of publication and the length of the citation window, the effect of such differences accounts for about 13% of total citation inequality. Third, a recently introduced two-parameter normalization scheme outperforms the other procedures over the entire period, reducing citation disparities to a level very close to the minimum achievable given the data and the classification system. However, the traditional procedure of using sub-field mean citations as normalization factors also yields good results.
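The traditional procedure mentioned above, cited-side normalization by sub-field mean citations, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the field names and citation counts are invented for the example, and real applications would use the full 172-sub-field classification.

```python
# Cited-side normalization by sub-field mean citations:
# each paper's raw citation count is divided by the mean citation
# count of its sub-field, so a normalized score of 1.0 means
# "average for the sub-field" regardless of field-level practices.
from collections import defaultdict

# Hypothetical papers; fields and counts are illustrative only.
papers = [
    {"field": "Mathematics", "citations": 4},
    {"field": "Mathematics", "citations": 12},
    {"field": "Cell Biology", "citations": 40},
    {"field": "Cell Biology", "citations": 120},
]

# Accumulate per-field citation sums and paper counts.
totals = defaultdict(lambda: [0, 0])  # field -> [sum, count]
for p in papers:
    totals[p["field"]][0] += p["citations"]
    totals[p["field"]][1] += 1
means = {field: s / n for field, (s, n) in totals.items()}

# Normalized score: raw count / sub-field mean citation count.
for p in papers:
    p["normalized"] = p["citations"] / means[p["field"]]
```

Note that after normalization the two fields become directly comparable: the first Mathematics paper and the first Cell Biology paper both score 0.5 (half their field's average), even though their raw counts differ by an order of magnitude.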
