A comparison of the Web of Science and publication-level classification systems of science

In this paper we propose a new criterion for choosing between a pair of classification systems of science that assign publications (or journals) to a set of clusters. Consider the standard target (cited-side) normalization procedure, in which cluster mean citations are used as normalization factors. We recommend system A over system B whenever the standard normalization procedure based on system A performs better than the one based on system B. Performance is assessed in terms of two double tests, one graphical and one numerical, that use both classification systems for evaluation purposes. In addition, a pair of classification systems is compared using a third, independent classification system for evaluation purposes. We illustrate this strategy by comparing a Web of Science journal-level classification system, consisting of 236 journal subject categories, with two publication-level algorithmically constructed classification systems consisting of 1,363 and 5,119 clusters. There are two main findings. First, the second publication-level system is found to dominate the first. Second, the publication-level system at the highest granularity level and the Web of Science journal-level system are found to be non-comparable. Nevertheless, we find reasons to recommend the publication-level option.
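The target (cited-side) normalization procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the toy data, and the cluster labels are assumptions for the example, and a publication's normalized score is simply its citation count divided by the mean citation count of the cluster it is assigned to under a given classification system.

```python
# Minimal sketch of cited-side (target) normalization:
# each publication's citation count is divided by the mean
# citation count of its cluster under a classification system.
from collections import defaultdict

def normalized_scores(citations, clusters):
    """citations[i]: raw citation count of publication i;
    clusters[i]: cluster label of publication i."""
    totals = defaultdict(int)
    counts = defaultdict(int)
    for c, k in zip(citations, clusters):
        totals[k] += c
        counts[k] += 1
    # Cluster mean citations serve as the normalization factors.
    means = {k: totals[k] / counts[k] for k in totals}
    return [c / means[k] for c, k in zip(citations, clusters)]

# Toy example: two clusters with different citation practices.
# Cluster A has mean 20, cluster B has mean 4, so publications
# with the same within-cluster standing get the same score.
print(normalized_scores([10, 30, 2, 6], ["A", "A", "B", "B"]))
# → [0.5, 1.5, 0.5, 1.5]
```

Because the normalization factors depend entirely on the cluster assignments, running this procedure under two different classification systems (e.g. journal subject categories versus algorithmically constructed publication-level clusters) generally yields different normalized scores for the same publication, which is what the paper's comparison criterion exploits.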
