The impact factor follies.
Argument by analogy poses problems for philosophers,1 but the rest of us love it. It lets us question the analogy and avoid the issue. Hernán's2 argument, for example, is a bit hard to follow, and the congruence between the numerators (a published article vs. a case) and the denominators (all articles published in a journal vs. all cases occurring at a facility) seems somehow inexact. But no matter. The analogy may be arguable, but the point is clear: epidemiology and good sense dictate that numerators and denominators be of the same logical type.

The frequency with which articles in a journal are cited, divided by the number of articles the journal publishes, is a ratio whose interpretation is dicey. A large ratio can result from a large numerator or a small denominator, and vice versa. The defects in the impact factor are legion (aberrations in counting, citation of nonresearch articles, gaming the system, inappropriate use for academic promotion, etc.).3,4 Rectifying the fraction by making numerator and denominator congruent will not solve the problem. As Douglas Altman has pointed out in an ongoing conversation on the listserv of the World Association of Medical Editors (WAME-L@LIST.NIH.GOV), the impact factor does not measure quality, but rather the frequency of citation, which is not at all the same thing. To use an analogy with social network analysis, the impact factor measures centrality, the extent to which a "node" (the journal, in this case) is connected to others. Judgments about that centrality (prominence, importance, influence) are revealed only with a lot more information about those connections.

These arguments have been around for some time, so why is the measure still with us? Tried and true? Tested by time? Corporate control? Faute de mieux? No. I would like to suggest that we have not abandoned it because, warts and all, it works for classifying journals. The big journals have large impact factors, and the lesser journals, smaller ones. Like it or not, the impact factor reflects, with occasional miscarriages, a pecking order that we all recognize. The New England Journal of Medicine has a higher impact than the 3 epidemiology journals that Hernán cites. Within the microcosm of epidemiology, those 3 journals have a higher impact than the one I edit. But more important, those 3 are clearly not very different from each other (small variation in impact factor notwithstanding), and represent first-tier journals within the field. The one I edit shares the second tier (impact factors in the 2's) with a number of others, an order we all recognize.

But if the impact factor simply designates the obvious, why bother with it? For editors, publishers, and sponsors, it provides categories for qualitative judgment, and permits appraisal of change (I have gone from "1" to "2"). We should jettison its cloak of pseudoquantitation: fix the numerous distortions, but abandon the 3 significant digits, and leave only the truncated integer (no rounding up, please). A journal's number would then better reflect its tier, and meretricious microdistinctions could be avoided. A journal can be judged by the company it keeps and the company it strives for. To use a final analogy, like the philosopher Mr. Ramsay, who managed to get to "Q" in his thinking,5 perhaps I can get to "3."
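[Editor's note: the ratio discussed above is described only loosely in the text. For readers who want it spelled out, the standard two-year impact factor as computed by Journal Citation Reports (their convention, not stated in the editorial) is

\[
\mathrm{IF}_{y} \;=\; \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}},
\]

where \(C_{y}(t)\) is the number of citations received in year \(y\) by items the journal published in year \(t\), and \(N_{t}\) is the number of citable items the journal published in year \(t\). The numerator-denominator incongruence the author notes arises because the numerator may count citations to items, such as editorials and letters, that the denominator excludes as non-citable.]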
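[Editor's note: a minimal sketch of the truncation proposal, with hypothetical impact factor values; Python is used only for illustration.

```python
import math

def tier(impact_factor: float) -> int:
    """Reduce a reported impact factor to its tier: truncate the
    integer part, never round up, as the editorial proposes."""
    # For nonnegative values, floor() is plain truncation.
    return math.floor(impact_factor)

# Hypothetical values: conventional rounding would promote 2.964 to "3";
# truncation keeps that journal honestly in the second tier.
for reported in (2.964, 2.103, 3.051):
    print(f"reported {reported:.3f} -> tier {tier(reported)}")
```

Under this scheme the "meretricious microdistinction" between 2.964 and 2.103 disappears; both journals simply share tier 2.]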
[1] Juthe A. Argument by analogy. Argumentation. 2005.
[2] Hernán MA. Epidemiologists (of all people) should question journal impact factors. Epidemiology. 2008.
[3] Rossner M, et al. Show me the data. J Cell Biol. 2007.
[4] Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997.