A commentary on misuses of the impact factor

When Eugene Garfield conceived an index in 1955 to measure the quality of scientific journals and to guide decisions about which journals would be covered by the Science Citation Index, he did not consider its possible misuse [2]. “At the beginning it did not occur to me that impact would one day become the subject of widespread controversy. It has been misused in many situations, especially in the evaluation of individual researchers. The term ‘impact factor’ (IF) has gradually evolved, especially in Europe, to mean both journal and author impact. This ambiguity often causes problems. The use of journal IF’s instead of actual article citation counts for evaluating authors is probably the most controversial issue.” [5].

Later, when Garfield realized the danger resulting from misunderstanding the IF, he took every occasion to warn against misuses of the index, for example in the following text: “Journal impact data have been grafted on to certain large scale studies of university departments and even individuals. Sometimes a journal’s impact is used as a substitute for the evaluation of recently published articles simply because it takes several years for the average article to be cited. However, a small percentage of articles will experience almost immediate and high citation. Using the journal’s average citation impact instead of the actual article impact is tantamount to grading by the prestige of the journal involved. While expedient, it is dangerous. Although journal assessments are important, evaluation of faculty is a much more important exercise that affects individual careers. Impact numbers should not be used as surrogates except in unusual circumstances.” [3].

It was noted quite long ago that there is only a weak correlation between the “citedness” of individual articles and the value of the IF of the journal in which they appear [10–12]. Many people naively assume that every journal can be characterized by a citation distribution that is quite narrow and centered near the journal’s IF, so that publication of an article in a high-IF journal automatically guarantees a large number of citations (see Fig. 1). Hence the false conclusion that a journal’s IF may be attributed to all articles within that journal and that it is a number useful in the evaluation of individual authors. The real distribution of citations is, however, broad and very skewed. The citation counts in any journal, regardless of its IF, exhibit an exponentially decreasing “background” and a “tail” of papers that have been cited many more times. It is a plausible hypothesis that the progress of science is due mainly (perhaps only?) to papers contributing to this “tail”. An example of real data is shown in Fig. 2, adapted from Redner [9]. Because the IF is an arithmetic mean over a journal’s recent articles (its definition and a numerical sketch are recalled at the end of this section), such a skewed distribution makes the IF a poor predictor of the citedness of any single paper.

Growing criticism of the use of journals’ IFs has been expressed by many authors [4, 6, 7, 8]. More recently, even the editors of journals have joined this criticism. An editorial in Nature of June 23, 2005, included data showing conclusively that the IF is strongly influenced by a small minority of papers, and concluded that “Impact factors don’t tell us as much as some people think about the quality of the science that journals are publishing.” [1]. Thus there is convincing evidence that it makes little sense to judge individual articles, or their authors, by the IF of the journal in which they were published.
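For reference, since the passage never states it explicitly: the IF of a journal J for a year Y is a two-year mean citation rate (the symbols C and N below are introduced here only for illustration):

\[
\mathrm{IF}_{Y}(J) \;=\; \frac{C_{Y}(J;\,Y\!-\!1) \,+\, C_{Y}(J;\,Y\!-\!2)}{N(J;\,Y\!-\!1) \,+\, N(J;\,Y\!-\!2)},
\]

where \(C_{Y}(J; y)\) is the number of citations received in year \(Y\) by items published in \(J\) in year \(y\), and \(N(J; y)\) is the number of citable items \(J\) published in year \(y\). Being a plain arithmetic mean, this quantity is exactly the kind of statistic that a handful of highly cited papers can dominate.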
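To illustrate the point numerically, the following sketch simulates a skewed citation distribution (a hypothetical log-normal stand-in for the broad distributions described above; all parameters are illustrative, not fitted to real data) and compares the IF-like mean with the median and with the citation share of the top few percent of papers:

```python
import random
import statistics

random.seed(42)

# Citation counts for 2000 papers in one hypothetical "journal".
# A log-normal distribution is used as a stand-in for the broad, skewed
# distributions described in the text: a lowly cited "background" plus
# a small, highly cited "tail".
citations = sorted(
    (int(random.lognormvariate(mu=1.0, sigma=1.2)) for _ in range(2000)),
    reverse=True,
)

mean_cites = statistics.mean(citations)      # the IF-like arithmetic mean
median_cites = statistics.median(citations)  # what a typical paper receives

# Share of all citations collected by the most-cited 5% of papers.
top_share = sum(citations[: len(citations) // 20]) / sum(citations)

print(f"mean (IF-like average): {mean_cites:.2f}")
print(f"median citations:       {median_cites:.1f}")
print(f"top 5% of papers earn   {top_share:.0%} of all citations")
```

With these parameters the mean typically comes out several times larger than the median, and the top 5% of papers collect a large share of all citations, in line with the observation that the IF is strongly influenced by a small minority of papers.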