Have the Annals editors added value?

The only real utility of any journal today lies in judging the value of submissions and improving them whenever possible. There are plenty of places to publish research (a journal for every article, it seems), so it is not enough simply to provide an outlet for work in the field. Rather, a journal's utility is defined by its editorial process and peer review, and by the standards they set for investigators. When a reader picks up a copy of the Annals, looks through our table of contents online, or sees an Annals article in a list of PubMed results, we hope that the standards we apply will encourage a deeper dive. This increases exposure of the work and, by extension, the impact of our publishing authors.

As we come to the end of our term, we have been asking ourselves, "How good a job have we done?" Has your trust in us been well placed? Can we be proud of our work as an editorial team? Unfortunately (or perhaps fortunately), the data available to address these critical questions are very limited.

Citations are often accepted as the fundamental currency of individual articles and journals. Faculty promotion committees increasingly scrutinize article citations, and the h index (the largest number h such that h of an investigator's articles have each been cited at least h times; an h index of 20, for example, means 20 articles cited at least 20 times each) has become a common metric for comparing career research productivity. Journals themselves tout their Impact Factors, which are based on article citations, as primary proof of prestige, as can be seen in the flurry of press releases that follows the annual announcement of the updated numbers.

By the Impact Factor metric, we seem to deserve a pat on the back. Although the increase has not been monotonic, our Impact Factor has risen over the last 10 years while we have continued to publish a large number of research articles and few review articles, which tend to be cited more frequently (Figure 1). The clinical neurosciences have also gained more robust funding during that time, so some of this gain may simply reflect more articles published in our field generating more citations for us and for all other neuroscience journals. However, other journals in our field have not seen gains in the last decade. Furthermore, the Article Influence Score, another measure of relative article impact that is standardized with respect to other journals, has also climbed steadily since its introduction in 2007 (Figure 1).

Of course, one of the reasons the Annals can publish impactful articles is that it receives great manuscripts from investigators. Perceived prestige drives submissions in a sort of self-fulfilling prophecy. Riding the wave of prestige is easy for any editorial team, so we have not congratulated ourselves much for our Impact Factor.

Looking at the articles we have accepted, however, we asked a slightly different question: are we actually good judges of impact? To answer it, we performed two studies, one prospective and one retrospective. In the prospective study, each editor was asked at the time of submission to judge whether a manuscript was likely to be high-impact. Such manuscripts were flagged, and the remaining steps of the review process proceeded as usual. Between January 2008 and July 2013, 34 submissions were flagged as potentially high-impact, of which four have not yet been published and seven were rejected after outside review. Among the 23 ultimately published in the Annals, 11 were cited more than average for their month of publication, and the average standardized citation rate was similar between those rated "high-impact" and the others (p = 0.41).
Thus, in this experiment, there was no evidence that the editors could select the high-impact articles.

FIGURE 1: The current editorial board began its tenure in October 2005, and since then the Impact Factor and Article Influence indices reflect steady gains. The Article Influence Score was first introduced in 2007.