Percentile Ranking and Citation Impact of a Large Cohort of National Heart, Lung, and Blood Institute–Funded Cardiovascular R01 Grants

Rationale: Funding decisions for cardiovascular R01 grant applications at the National Heart, Lung, and Blood Institute (NHLBI) largely hinge on percentile rankings. It is not known whether this approach enables the highest-impact science.

Objective: Our aim was to conduct an observational analysis of percentile rankings and bibliometric outcomes for a contemporary set of funded NHLBI cardiovascular R01 grants.

Methods and Results: We identified 1492 investigator-initiated de novo R01 grant applications that were funded between 2001 and 2008 and followed their progress for linked publications and citations to those publications. Our coprimary end points were citations received per million dollars of funding, citations obtained within 2 years of publication, and 2-year citations for each grant's maximally cited paper. In 7654 grant-years of funding that generated $3004 million of total National Institutes of Health awards, the portfolio yielded 16 793 publications that appeared between 2001 and 2012 (median per grant, 8; 25th and 75th percentiles, 4 and 14; range, 0–123), which received 2 224 255 citations (median per grant, 1048; 25th and 75th percentiles, 492 and 1932; range, 0–16 295). We found no association between percentile rankings and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus nonhuman focus, and institutional funding. An exploratory machine learning analysis suggested that grants with the best percentile rankings did yield more maximally cited papers.

Conclusions: In a large cohort of NHLBI-funded cardiovascular R01 grants, we were unable to find a monotonic association between better percentile ranking and higher scientific impact as assessed by citation metrics.
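The coprimary end points above reduce to simple per-grant arithmetic. As a minimal sketch only, the snippet below computes one of them, citations received per million dollars of funding, together with the median and interquartile range reported in the abstract's style; the grant records and dollar figures are hypothetical illustrations, not the study's actual data, and the field names are invented for this example.

```python
from statistics import median, quantiles

# Hypothetical per-grant records (illustrative values only, not study data):
# total citations to all linked papers, and total NIH funding in $ millions.
grants = [
    {"citations": 1048, "funding_millions": 2.0},
    {"citations": 492,  "funding_millions": 1.5},
    {"citations": 1932, "funding_millions": 2.5},
    {"citations": 760,  "funding_millions": 2.0},
    {"citations": 2100, "funding_millions": 3.0},
]

# Coprimary end point: citations received per million dollars of funding.
cites_per_million = [g["citations"] / g["funding_millions"] for g in grants]

# Summarize as in the abstract: median with 25th and 75th percentiles.
med = median(cites_per_million)
q25, _, q75 = quantiles(cites_per_million, n=4)
print(f"median: {med:.1f}; 25th/75th percentiles: {q25:.1f}, {q75:.1f}")
```

Testing for an association with percentile ranking would then amount to regressing such metrics on each grant's ranking, with the covariates the abstract lists (calendar time, grant duration, and so on) included as adjustments.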
