Problems with the SNIP indicator

As is well known, citation practices differ across academic fields, especially between the science, social science, and arts and humanities domains. Thus, when using citations as a measure of research impact, whether for journals, individuals, or departments and institutions, it is necessary to normalise the raw data to the general citation potential (Garfield, 1972, 1979) of the research area. Traditionally, this has been done by normalising to the number of citations generally received in the field ("cited-side" normalisation (Schubert & Braun, 1986)). The most widely used method, developed by CWTS (Moed, 2010a; van Raan, Moed, & van Leeuwen, 2007; van Raan, 2003, 2005) and known as the Leiden methodology, compared citation rates per paper to the mean of such rates across a defined research field. Typically, the list of journals constituting the field was provided by Web of Science (Mingers & Lipitakis, 2013).

More recently, an approach was developed (Moed, 2010b; Zitt & Small, 2008) that normalised against the mean number of references per citing paper in the field ("citing-side" normalisation). It was also innovative in not using a pre-defined definition of the relevant field. Rather, the journal's subject field "is defined as the collection of papers citing that journal" (Moed, 2010b, p. 267). More specifically, the subject field is the set of papers that, in a particular year, cite at least one paper published in the journal in the preceding ten years (p. 275). This approach is called "source-normalised impact per paper" (SNIP). It was developed in conjunction with Elsevier and implemented in their Scopus database for all the journals it contains. It is essentially a normalised impact factor for a journal and is becoming widely used. However, problems with this indicator were highlighted by Leydesdorff (2013) and Leydesdorff and Opthof (2010a,b) (discussed below), and CWTS itself recognised further difficulties.
This led to a revised version of SNIP being presented in 2013 (Waltman, van Eck, van Leeuwen, & Visser, 2013). The purpose of this letter is to point out problems with SNIP in both its revised and its original form.
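The contrast between the two normalisation approaches described above can be sketched in a few lines of code. This is a simplified illustration only, not the actual SNIP algorithm (which involves further details such as the ten-year citation window, "active" references, and database coverage); the function names and all numbers are invented for the example.

```python
def cited_side_impact(journal_cites_per_paper, field_mean_cites_per_paper):
    """Cited-side ("Leiden"-style) normalisation: compare a journal's
    citations per paper to the mean citation rate of a pre-defined field."""
    return journal_cites_per_paper / field_mean_cites_per_paper


def citing_side_impact(journal_cites_per_paper, citing_paper_ref_counts):
    """Citing-side (SNIP-style) normalisation: divide the journal's raw
    citations per paper by the mean number of references carried by the
    papers that cite it, a proxy for the field's citation potential."""
    mean_refs = sum(citing_paper_ref_counts) / len(citing_paper_ref_counts)
    return journal_cites_per_paper / mean_refs


# Two hypothetical journals with the same raw impact (2.0 cites per paper),
# one cited mostly by short-reference-list papers (e.g. mathematics), one
# cited mostly by long-reference-list papers (e.g. biomedicine).
raw_impact = 2.0
maths_citing_refs = [8, 10, 12]
biomed_citing_refs = [38, 40, 42]

print(citing_side_impact(raw_impact, maths_citing_refs))   # 0.2
print(citing_side_impact(raw_impact, biomed_citing_refs))  # 0.05
```

The point of the sketch is that citing-side normalisation needs no external field classification: the denominator is computed directly from the citing papers themselves, so a journal in a low-citation-potential field (short reference lists) is automatically scored more generously than one with the same raw impact in a high-citation-potential field.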

[1] Loet Leydesdorff, The revised SNIP indicator of Elsevier's Scopus, 2013, J. Informetrics.

[2] Thed N. van Leeuwen et al., Some modifications to the SNIP journal impact indicator, 2012, J. Informetrics.

[3] Loet Leydesdorff et al., Group-based trajectory modeling (GBTM) of citations in scholarly literature: Dynamic qualities of "transient" and "sticky knowledge claims", 2013, J. Assoc. Inf. Sci. Technol.

[4] Daryl E. Chubin et al., Is citation analysis a legitimate evaluation tool?, 1979, Scientometrics.

[5] Vincent Larivière et al., The place of serials in referencing practices: Comparing natural sciences and engineering with social sciences and humanities, 2006, J. Assoc. Inf. Sci. Technol.

[6] John Mingers et al., Evaluating a department's research: Testing the Leiden methodology in business and management, 2013, Inf. Process. Manag.

[7] John Mingers et al., Counting the citations: a comparison of Web of Science and Google Scholar in the field of business and management, 2010, Scientometrics.

[8] Tibor Braun et al., Relative indicators and relational charts for comparative assessment of publication output and citation impact, 1986, Scientometrics.

[9] Loet Leydesdorff et al., Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance, 2010, J. Informetrics.

[10] E. Garfield, Citation analysis as a tool in journal evaluation, 1972, Science.

[11] Loet Leydesdorff et al., Remaining problems with the "New Crown Indicator" (MNCS) of the CWTS, 2010, J. Informetrics.

[12] Henk F. Moed, Measuring contextual citation impact of scientific journals, 2009, J. Informetrics.

[13] L. Leydesdorff, Caveats for the use of citation indicators in research and journal evaluations, 2008.

[14] John Mingers et al., Exploring the dynamics of journal citations: Modelling with s-curves, 2008, J. Oper. Res. Soc.

[15] H. Small et al., Modifying the journal impact factor by fractional citation weighting: The audience factor, 2008.

[16] Loet Leydesdorff et al., Scopus' SNIP indicator: Reply to Moed, 2011, J. Assoc. Inf. Sci. Technol.

[17] A. van Raan, The use of bibliometric analysis in research performance assessment and monitoring of interdisciplinary scientific developments, 2003.

[18] H. Moed, CWTS crown indicator measures citation impact of a research group's publication oeuvre, 2010, J. Informetrics.

[19] Anthony F. J. van Raan, Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods, 2005, Scientometrics.