Four pitfalls in normalizing citation indicators: An investigation of ESI's selection of highly cited papers

InCites Essential Science Indicators (ESI) is increasingly used to identify top-performing research and to evaluate the impact of institutions. Unfortunately, our study shows that ESI indicators, like other normalized citation indicators, suffer from the following flaws. First, the publication month affects a paper’s probability of becoming a Highly Cited Paper (HCP): papers published in the earlier months of the year have more time to accumulate the citations needed to rank in the top 1% than those published later in the same year. Second, papers with longer online-to-print delays have a clear advantage in being selected as HCPs. Third, research field normalization is problematic: different research fields have different citation thresholds for HCPs, so the field a journal is classified into matters for its papers’ chances of selection. Fourth, ESI applies uniform thresholds to both articles and reviews, which undermines the reliability of HCP selection because reviews, on average, attract more citations than articles. ESI’s selection of HCPs thus offers an intuitive illustration of the problems affecting normalized citation impact indicators, such as those provided in InCites and SciVal.
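The selection mechanism behind these pitfalls can be made concrete with a toy sketch. The Python snippet below is a minimal illustration under our own assumptions, not ESI’s actual implementation: the field names, dates, and citation counts are invented, and the grouping keys are simplified. It ranks papers by citation count within each (field, publication year) group, keeps the top 1%, and shows why a paper published in January enjoys a much longer citation window, and hence a better chance of crossing the threshold, than one published in December of the same year.

```python
from dataclasses import dataclass
from datetime import date
import math

@dataclass
class Paper:
    """Toy record; field names, dates, and counts are invented."""
    title: str
    field: str        # research field assigned via the journal
    doc_type: str     # "article" or "review"
    published: date   # print publication date
    citations: int    # citations accumulated by the census date

def select_hcps(papers, top_share=0.01):
    """Sketch of ESI-style HCP selection: rank papers by citation
    count within each (field, publication year) group and keep the
    top 1%. Note that articles and reviews share one threshold."""
    groups = {}
    for p in papers:
        groups.setdefault((p.field, p.published.year), []).append(p)
    hcps = []
    for group in groups.values():
        group.sort(key=lambda p: p.citations, reverse=True)
        k = max(1, math.ceil(top_share * len(group)))
        hcps.extend(group[:k])
    return hcps

# Pitfall 1 in miniature: both papers attract roughly two citations
# per month, but by a census date of 2023-12-31 the January paper has
# had eleven more months of exposure than the December paper, so only
# it can clear a threshold computed per publication *year*.
jan = Paper("A", "Chemistry", "article", date(2023, 1, 15), citations=24)
dec = Paper("B", "Chemistry", "article", date(2023, 12, 15), citations=2)
```

The same grouping logic echoes the other pitfalls: a longer online-to-print delay shifts a paper into an earlier effective citation window, the per-field grouping means a journal’s field classification decides which threshold applies, and because reviews sit in the same groups as articles, their higher average citation rates push the shared threshold up.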
