Aggregating productivity indices for ranking researchers across multiple areas

The impact of scientific research has traditionally been quantified using productivity indices such as the well-known h-index. However, different research fields---indeed, even different research areas within a single field---may exhibit very different publishing patterns, which a single, global index may not describe well. In this paper, we argue that productivity indices should account for the particularities of the publication patterns of different research areas in order to produce an unbiased assessment of the impact of scientific research. Inspired by rank aggregation approaches from distributed information retrieval, we propose a novel approach for ranking researchers across multiple research areas. Our approach is generic and produces cross-area versions of any global productivity index, such as publication volume, citation count, and even the h-index. A thorough evaluation covering multiple areas within the broad field of Computer Science shows that our cross-area indices outperform their global counterparts when assessed against the official ranking produced by CNPq, the Brazilian National Council for Scientific and Technological Development. As a result, this paper contributes a valuable mechanism to support the decisions of funding bodies and research agencies in any research assessment effort.
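To make the ideas above concrete, the sketch below computes the standard h-index and then illustrates one plausible way to make raw index values comparable across areas: replacing each researcher's raw score with their within-area percentile. The `cross_area_rank` function is a hypothetical simplification for illustration only; the paper's actual aggregation method, data model, and function names are not specified in this abstract.

```python
def h_index(citations):
    """h-index: the largest h such that the researcher has at least
    h papers with at least h citations each (Hirsch, 2005)."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h


def cross_area_rank(area_scores):
    """Hypothetical cross-area normalization (illustrative only).

    area_scores maps each area name to a dict of {researcher: raw index
    value}. Each raw value is converted to a within-area percentile, so
    researchers from areas with very different publishing patterns become
    comparable. A researcher active in several areas keeps their best
    percentile.
    """
    out = {}
    for scores in area_scores.values():
        values = list(scores.values())
        n = len(values)
        for researcher, v in scores.items():
            # fraction of researchers in this area scoring at or below v
            pct = sum(1 for x in values if x <= v) / n
            out[researcher] = max(out.get(researcher, 0.0), pct)
    return out
```

For example, `h_index([10, 8, 5, 4, 3])` returns 4, and a researcher who tops a low-citation area receives the same percentile (1.0) as one who tops a high-citation area, which is the intuition behind area-aware indices.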
