Revisiting the scaling of citations for research assessment

Over the past decade, national research evaluation exercises, traditionally conducted by peer review, have begun to incorporate bibliometric indicators. The citations received by a publication are taken as a proxy for its quality, but because citation behavior varies across research fields, citation data must be standardized before they can be used in comparative evaluations of organizations or individual scientists. The objective of this paper is to compare the effectiveness of different methods of normalizing citations, in order to provide useful indications to research assessment practitioners. Simulating a typical national research assessment exercise, the analysis covers all subject categories in the hard sciences and is based on the Thomson Reuters Science Citation Index-Expanded®. The comparisons show that the citation average is the most effective scaling parameter, provided the average is computed only over the publications actually cited.
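The scaling the abstract favors can be illustrated with a minimal sketch. The function below, with hypothetical citation counts, divides a paper's citations by the field average, computed either over all publications or, as the paper's comparison favors, only over those actually cited; the example data and function names are illustrative assumptions, not from the paper.

```python
def normalized_score(citations, field_counts, cited_only=True):
    """Scale a citation count by the field's average citations.

    If cited_only is True, the baseline average excludes uncited
    (zero-citation) publications.
    """
    baseline = [c for c in field_counts if c > 0] if cited_only else list(field_counts)
    avg = sum(baseline) / len(baseline)
    return citations / avg

# Hypothetical field: many uncited papers pull the plain average down,
# which inflates normalized scores relative to the cited-only baseline.
field = [0, 0, 0, 1, 2, 3, 10, 24]

score_all = normalized_score(10, field, cited_only=False)    # 10 / 5.0 = 2.0
score_cited = normalized_score(10, field, cited_only=True)   # 10 / 8.0 = 1.25
print(score_all, score_cited)
```

Which baseline is preferable is exactly the empirical question the paper addresses; the sketch only shows how the choice of denominator changes the normalized value.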
