Remaining problems with the "New Crown Indicator" (MNCS) of the CWTS

In their article entitled "Towards a new crown indicator: some theoretical considerations," Waltman et al. (2010; arXiv:1003.2167) show that the "old crown indicator" of CWTS in Leiden was mathematically inconsistent and that one should move to the normalization applied in the "new crown indicator." Although we now agree about the statistical normalization, the "new crown indicator" inherits the scientometric problems of the "old" one: it treats the subject categories of journals as the standard for normalizing differences in citation behavior among fields of science. We further note that the mean is not a proper statistic for measuring differences among skewed distributions. Without changing the acronym "MNCS," one could define the "Median Normalized Citation Score." This would relate the new crown indicator directly to the percentile approach that is used, for example, in the Science and Engineering Indicators of the US National Science Board (2010). The median is by definition equal to the 50th percentile. The indicator can thus easily be extended to the 1% (= 99th percentile) most highly cited papers (Bornmann et al., in press). The seeming disadvantage of having to use non-parametric statistics is more than compensated by possible gains in precision.
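To illustrate why the mean misleads for skewed citation distributions, the sketch below contrasts a mean-normalized score with a median-based alternative on hypothetical data. The citation counts, the field reference set, and the exact definition of the median variant are assumptions for illustration only, not the indicator as specified by CWTS:

```python
import statistics

# Hypothetical citation counts (assumed data): a research group's papers and
# the reference set of its field. Citation distributions are typically skewed,
# here by single highly cited outliers (55 and 40).
group = [0, 0, 1, 1, 2, 3, 5, 8, 13, 55]
field = [0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, 40]

# Mean-normalized score (MNCS-style): mean of the ratios of each paper's
# citation count to the field mean.
field_mean = statistics.mean(field)
mncs = statistics.mean(c / field_mean for c in group)

# One possible "Median Normalized Citation Score": median of the ratios of
# each paper's citation count to the field median (robust to the tail).
field_median = statistics.median(field)
median_ncs = statistics.median(c / field_median for c in group)

print(round(mncs, 2))        # → 1.59, pulled up by the 55-citation outlier
print(round(median_ncs, 2))  # → 1.25, unaffected by the tail
```

Dropping the single 55-citation paper changes the mean-based score substantially but leaves the median-based score untouched, which is the robustness property the percentile approach exploits.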

[1]  Reinier Plomp, et al. The significance of the number of highly cited papers as an indicator of scientific prolificacy, 1990, Scientometrics.

[2]  Alexander I. Pudovkin, et al. Percentile Rank and Author Superiority Indexes for Evaluating Individual Journal Articles and the Author’s Overall Citation Performance, 2009.

[3]  Loet Leydesdorff, et al. Normalization at the field level: fractional counting of citations, 2010, J. Informetrics.

[4]  Ismael Rafols, et al. Content-based and algorithmic classifications of journals: Perspectives on the dynamics of scientific communication and indexer effects, 2009.

[5]  Alexander I. Pudovkin, et al. Algorithmic procedure for finding semantically related journals, 2002, J. Assoc. Inf. Sci. Technol.

[6]  Daryl E. Chubin, et al. Is citation analysis a legitimate evaluation tool?, 1979, Scientometrics.

[7]  James G. Corrigan, et al. Programmatic evaluation and comparison based on standardized citation scores, 1983, IEEE Transactions on Engineering Management.

[8]  Loet Leydesdorff, et al. of Science, 2022.

[9]  Michel Zitt, et al. Modifying the journal impact factor by fractional citation weighting: The audience factor, 2008, J. Assoc. Inf. Sci. Technol.

[10]  Lutz Bornmann, et al. OPEN ACCESS, 2008.

[11]  Stephen J. Bensman, et al. Definition and identification of journals as bibliographic and subject entities: Librarianship versus ISI Journal Citation Reports methods and their effect on citation measures, 2009.

[12]  Loet Leydesdorff, et al. Can scientific journals be classified in terms of aggregated journal-journal citation relations using the Journal Citation Reports?, 2009, J. Assoc. Inf. Sci. Technol.

[13]  Thed N. van Leeuwen, et al. Rivals for the crown: Reply to Opthof and Leydesdorff, 2010, J. Informetrics.

[14]  Thed N. van Leeuwen, et al. Towards a new crown indicator: Some theoretical considerations, 2010, J. Informetrics.

[15]  Loet Leydesdorff, et al. Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance, 2010, J. Informetrics.

[16]  Loet Leydesdorff. Can scientific journals be classified in terms of aggregated journal-journal citation relations using the Journal Citation Reports?, 2006.

[17]  Loet Leydesdorff, et al. Do Scientific Advancements Lean on the Shoulders of Giants? A Bibliometric Investigation of the Ortega Hypothesis, 2010, PLoS ONE.

[18]  Kevin W. Boyack, et al. Mapping the backbone of science, 2004, Scientometrics.

[19]  H. Moed. CWTS crown indicator measures citation impact of a research group's publication oeuvre, 2010, J. Informetrics.