Ranking scientists and departments in a consistent manner

The standard data used when computing bibliometric rankings of scientists are their publication/citation records, i.e., so many papers with 0 citations, so many with 1 citation, so many with 2 citations, and so on. The standard data for bibliometric rankings of departments have the same structure. It is therefore tempting (and many authors have given in to the temptation) to use the same method for computing rankings of scientists and rankings of departments. Depending on the method, this can yield quite surprising and unpleasant results: with some methods, the “best” department may contain the “worst” scientists, and only them. This problem cannot occur if the rankings satisfy a property called consistency, recently introduced in the literature. In this article, we explore the consequences of consistency and characterize two families of consistent rankings. © 2011 Wiley Periodicals, Inc.
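To see the kind of reversal the abstract alludes to, consider a minimal sketch using the h-index (the citation records below are hypothetical, chosen for illustration; the paper itself works with general rankings, not this specific index). Scientist B outranks scientist A by h-index, yet a department made only of A-type scientists, evaluated by pooling its members' papers into one record, outranks a department made only of B-type scientists:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites):
        if c >= i + 1:
            h = i + 1
        else:
            break
    return h

# Hypothetical individual records:
a = [10]      # scientist A: one paper, 10 citations  -> h = 1
b = [2, 2]    # scientist B: two papers, 2 citations  -> h = 2
print(h_index(a), h_index(b))            # 1 2  (B ranked above A)

# Departments built by pooling member records:
dept_a = [10] * 10   # ten A-type scientists
dept_b = [2] * 20    # ten B-type scientists
print(h_index(dept_a), h_index(dept_b))  # 10 2  (department of A's ranked above)
```

The h-index thus ranks the department of “worse” scientists first, which is exactly the failure of consistency that the article's characterized families of rankings rule out.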
