The Journal Impact Factor and alternative metrics

Journal impact factors (JIFs) have become a widely used tool for judging the quality of scientific journals and of individual publications. JIFs are calculated by the scientific division of Thomson Reuters and published annually in the Journal Citation Reports (JCR). Originally, the JCR was shaped by the needs of librarians, who wanted a quantitative method for selecting journals for their holdings. Approximately 11,000 academic journals are currently listed in the JCR, and the JIF has become one of the most important indicators in evaluative bibliometrics.

Although the metric was never designed to evaluate individual papers or researchers, but rather journals as a whole, the ready availability of JIFs has turned it into a common tool for evaluating research. In Europe especially, it is common to use JIFs as a basis for decisions on research grants, hiring, and salaries. However, JIFs are not statistically representative of individual papers and correlate poorly with their actual citation counts. A study of six economics journals showed “that the best article in an issue of a good to medium-quality journal routinely goes on to have much more citations impact than a ‘poor’ article published in an issue of a more prestigious journal” [1].

There is growing unease within the scientific community, among journal publishers, and within funding agencies that the widespread misuse of JIFs to measure the quality of research, with its profound impact on researchers' careers, is detrimental to science itself. The San Francisco Declaration on Research Assessment (DORA), initiated by the American Society for Cell Biology together with editors and publishers, calls for a move away from using JIFs to evaluate individual scientists or research groups and for the development of more reliable ways to measure the quality and impact of research. Various funding agencies have also begun to discourage the use of JIFs …
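For reference, the calculation behind the JIF is simple: a journal's JIF for a given year is the mean number of citations received that year by the items the journal published in the two preceding years. In standard notation (the symbols below are ours, not the JCR's):

\[
\mathrm{JIF}_{Y} \;=\; \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}},
\]

where \(C_{Y}(y)\) is the number of citations received in year \(Y\) by items the journal published in year \(y\), and \(N_{y}\) is the number of citable items the journal published in year \(y\). As a hypothetical example, a journal that published 100 citable items in each of 2013 and 2014, and whose 2013 and 2014 items were cited 500 times during 2015, would have a 2015 JIF of 500/200 = 2.5. This averaging over entire journal volumes is precisely why the JIF says little about any single paper: citation distributions are highly skewed, so a few highly cited articles can dominate the mean.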