Can we do better than existing author citation metrics?

Throughout the world, research bodies have begun to quantify research quality through citation analysis. Both Britain [1] and Australia [2] now incorporate a bibliometric element when assessing institutional research output. The Institute for Research Information and Quality Assurance was recently established with the core goal of evaluating research performance funded by the German Research Foundation. In China [3], authors have been asked to publish only in journals that are indexed in the Institute for Scientific Information (ISI) Science Citation Index and therefore receive an Impact Factor. Some individual institutions have even more specific requirements [4].

With so many researchers and specialised disciplines, it is tempting for funding agencies and governments to reduce an author's publication career to a single metric. It is also spectacularly difficult [5]. When attempting to follow Professor Hopper's advice to take one measurement over a thousand expert opinions, the challenge lies in making that one measurement accurate. To measure author impact properly, such a metric would need a number of important attributes, many of them involving some complex statistics [6]. We may consider the following a simplified list of the most crucial criteria. First, the metric would have to be unambiguous, so that two calculations of the metric could not reach different results, and so that only data for the author under study are included. Second, it would need to compare authors fairly across different subjects, countries, and types of paper. Third, it would need to take account of time, both the age of the articles and the length of the author's publication career. Fourth, the metric must be easily calculated, particularly if it is to be generated systematically for large numbers of authors.
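The fourth criterion is met by the simplest author metrics in current use: Hirsch's h-index [13] and Egghe's g-index [19] can each be computed from nothing more than a sorted list of an author's per-paper citation counts. A minimal sketch (the function names and sample data below are ours, for illustration only):

```python
def h_index(citations):
    """h-index (Hirsch, 2005): the largest h such that the author has
    h papers with at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank          # paper at this rank still clears the threshold
        else:
            break             # sorted descending, so no later paper can
    return h

def g_index(citations):
    """g-index (Egghe, 2006): the largest g such that the top g papers
    together have at least g^2 citations."""
    total, g = 0, 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        total += c            # running citation total of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

# An author with papers cited 10, 8, 5, 4 and 3 times:
print(h_index([10, 8, 5, 4, 3]))  # 4 papers with >= 4 citations each
print(g_index([10, 8, 5, 4, 3]))  # top 5 papers total 30 >= 25 citations
```

Note that neither function needs any information beyond raw citation counts, which is precisely why these indices are so easy to generate at scale, and also why they fail the fairness and time criteria above.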
Current methods of citation analysis based on this system fail our second criterion, because case studies and practitioner and clinical articles attract a lower level of citation that does not reflect their actual usefulness to the academic community [7]. Additionally, the two major citation indices, Web of Science and Scopus, both index a higher proportion of English-language journals than non-English ones, skewing results in favour of the West, and towards the US and UK in particular [8]. But the problem lies with more than the data: the author metrics currently in use are found lacking in other respects too.

[1] P. Lawrence. Lost in publication: how measurement harms science, 2008.

[2] Matthew E. Falagas, et al. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses, 2007, FASEB Journal.

[3] Lutz Bornmann, et al. Do we need the h index and its variants in addition to standard bibliometric measures?, 2009.

[4] A. Kulkarni, et al. Comparisons of citations in Web of Science, Scopus, and Google Scholar for articles published in general medical journals, 2009, JAMA.

[5] Anne-Wil Harzing, et al. Google Scholar as a new source for citation analysis, 2008.

[6] R. Rousseau, et al. The R- and AR-indices: complementing the h-index, 2007.

[7] Linda Butler, et al. Using a balanced approach to bibliometrics: quantitative performance measures in the Australian Research Quality Framework, 2008.

[8] Lutz Bornmann, et al. What do we know about the h index?, 2007, J. Assoc. Inf. Sci. Technol.

[9] Loet Leydesdorff, et al. Scopus's Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations, 2010, J. Assoc. Inf. Sci. Technol.

[10] Lokman I. Meho, et al. A New Era in Citation and Bibliometric Analyses: Web of Science, Scopus, and Google Scholar, 2006, arXiv.

[11] Per O. Seglen. The Skewness of Science, 1992, J. Am. Soc. Inf. Sci.

[12] Lokman I. Meho, et al. Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar, 2007.

[13] J. E. Hirsch. An index to quantify an individual's scientific research output, 2005, Proc. Natl. Acad. Sci. USA.

[14] D. Pendlebury. The use and misuse of journal metrics and other citation indicators, 2009, Archivum Immunologiae et Therapiae Experimentalis.

[15] Thed N. van Leeuwen, et al. Towards a new crown indicator: some theoretical considerations, 2010, J. Informetrics.

[16] Qais Al-Awqati. Impact factors and prestige, 2007, Kidney International.

[17] Yannis Manolopoulos, et al. Generalized Hirsch h-index for disclosing latent facts in citation networks, 2007, Scientometrics.

[18] Yannis Manolopoulos, et al. Generalized h-index for Disclosing Latent Facts in Citation Networks, 2006, arXiv.

[19] L. Egghe. An improvement of the h-index: the g-index, 2006.

[20] Carl T. Bergstrom, et al. Differences in impact factor across fields and over time, 2008, J. Assoc. Inf. Sci. Technol.

[21] Lutz Bornmann, et al. Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine, 2008, J. Assoc. Inf. Sci. Technol.

[22] Henk F. Moed. Measuring contextual citation impact of scientific journals, 2009, J. Informetrics.

[23] Lutz Bornmann, et al. Do we need the h index and its variants in addition to standard bibliometric measures?, 2009, J. Assoc. Inf. Sci. Technol.

[24] Michel Zitt, et al. Modifying the journal impact factor by fractional citation weighting: the audience factor, 2008, J. Assoc. Inf. Sci. Technol.

[25] M. Kosmulski. A new Hirsch-type index saves time and works equally well as the original h-index, 2009.

[26] Is the journal impact factor a valid indicator of scientific value?, 2009, Singapore Medical Journal.

[27] P. Perakakis, et al. The siege of science, 2008.

[28] Kieron Flanagan, et al. A Comparative Study of the Purchase, Management and Use of Large-scale Research Equipment in the UK and US Universities (report for Evidence Ltd on behalf of the Higher Education Funding Council for England), 2002.

[29] Alexander I. Pudovkin, et al. Percentile Rank and Author Superiority Indexes for Evaluating Individual Journal Articles and the Author's Overall Citation Performance, 2009.