Alternatives to the journal impact factor: I3 and the top-10% (or top-25%?) of the most-highly cited papers

Journal impact factors (IFs) can be considered, historically, as the first attempt to normalize citation distributions by averaging over two years. However, it has since been recognized that citation distributions vary among fields of science and that one must normalize for this. Furthermore, the mean—or any other measure of central tendency—is not a good representation of a citation distribution, because these distributions are skewed. Important steps have been taken in recent years to solve these two problems. First, one can normalize at the article level, using the citing audience as the reference set. Second, one can use non-parametric statistics to test the significance of differences among ratings. A proportion of most-highly cited papers (the top 10% or top quartile), based on fractional counting of citations, may provide an alternative to the current IF. This indicator is intuitively simple, allows for statistical testing, and accords with the state of the art.
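The two ingredients of the proposed indicator—fractional counting of citations and the share of papers in the top 10% of a reference distribution—can be sketched as follows. This is a minimal illustration only: the data, the function names, and the tie-breaking rule are all hypothetical assumptions, not the computation used in the studies cited here.

```python
# Minimal sketch: fractional citation counting and a top-10% share.
# All data and names below are hypothetical illustrations.

def fractional_citations(citing_refcounts):
    """Each citation is weighted by 1/(reference-list length of the
    citing paper), normalizing at the article level for the citation
    density of the citing audience."""
    return sum(1.0 / n for n in citing_refcounts if n > 0)

def top_share(scores, subset, pct=0.10):
    """Proportion of `subset` papers whose score reaches the top `pct`
    of the reference distribution `scores` (ties counted as 'in')."""
    threshold = sorted(scores, reverse=True)[max(int(len(scores) * pct) - 1, 0)]
    return sum(1 for s in subset if s >= threshold) / len(subset)

# A hypothetical reference set of 20 papers; each inner list gives the
# reference-list lengths of the papers citing that paper.
reference_set = [fractional_citations(refs) for refs in [
    [10, 20], [30], [25, 50, 10], [40], [], [15], [60, 30], [20],
    [10], [5, 5], [100], [12, 8], [7], [], [9, 9, 9], [11],
    [33], [44, 22], [6], [13],
]]
journal_papers = reference_set[:5]  # papers of one (hypothetical) journal
print(top_share(reference_set, journal_papers, 0.10))
```

Because the indicator is a proportion rather than a mean, the skewness of the underlying citation distribution does not distort it, and standard tests for proportions can be applied to differences between units.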
