Proposal of using scaling for calculating field-normalized citation scores

Since the end of the 1980s, citation impact values, especially for evaluative purposes, have increasingly been presented as field-normalized citation scores rather than as bare citation counts or citation rates. In rather popular variants of these scores, the average score over a publication year is not exactly one, because papers can be assigned to multiple Web of Science subject categories. We propose a scaling method that introduces slight changes in the field-normalized score of each paper and thereby ensures that the average value of all scores equals exactly one.
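The scaling idea can be illustrated with a minimal sketch, assuming the field-normalized scores have already been computed (e.g. citations divided by the mean citation rate of each paper's fields); the function name and example values below are illustrative, not from the paper:

```python
def rescale_to_unit_mean(scores):
    """Divide every field-normalized score by the overall mean.

    When papers belong to multiple subject categories, the average of
    the scores is typically close to, but not exactly, one. Dividing
    each score by the overall mean changes every score only slightly
    while making the average exactly one.
    """
    mean = sum(scores) / len(scores)
    return [s / mean for s in scores]


# Example: three scores whose average is 1.02 rather than exactly 1.
scores = [0.51, 1.02, 1.53]
rescaled = rescale_to_unit_mean(scores)
print(rescaled)                        # [0.5, 1.0, 1.5]
print(sum(rescaled) / len(rescaled))   # average is now exactly 1.0
```

Note that the rescaling preserves the ranking and the relative proportions of the scores; only the common multiplicative factor changes.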
