Measuring the societal impact of research

Even before the Second World War, governments had begun to invest public funds in scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main beneficiary, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the Endless Frontier, and it has become the underlying rationale for public support and funding of science. However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be distributed among researchers and research projects most efficiently and effectively? This challenge of identifying promising research spawned the development of measures both to assess the quality of scientific research itself and to determine its societal impact. Although the first set of measures has been relatively successful and is widely used to determine the quality of journals, research projects and research groups, it has proved much harder to develop reliable and meaningful measures of the societal impact of research. The impact of applied research, such as drug development, IT or engineering, is obvious, but the benefits of basic research are less so, harder to assess, and have come under increasing scrutiny since the 1990s [1]. In …

[1] L. Rymer et al. Measuring the Impact of Research--The Context for Metric Development. Go8 Backgrounder 23, 2011.

[2] B. R. Martin et al. The Research Excellence Framework and the 'impact agenda': are we creating a Frankenstein monster?, 2011.

[3] L. Bornmann et al. The state of h index research, EMBO Reports, 2009.

[4] C. Donovan et al. State of the art in assessing research impact: introduction to a special issue, 2011.

[5] B. Bozeman et al. Public Value Mapping and Science Policy Evaluation, 2011.

[6] L. Bornmann et al. Mimicry in science?, Scientometrics, 2010.

[7] R. Van Noorden et al. Metrics: Do metrics matter?, Nature, 2010.

[8] A. Rip et al. Evaluation of societal quality of public sector research in the Netherlands, 2000.

[9] E. Hazelkorn. Assessing Europe's University-Based Research, 2010.

[10] S. Wooding et al. Capturing Research Impacts, 2010.

[11] L. Bornmann et al. Diversity, value and limitations of the journal impact factor and alternative metrics, Rheumatology International, 2012.

[12] M. Maier et al. Development of a practical tool to measure the impact of publications on the society based on focus group discussions with scientists, BMC Public Health, 2011.

[13] F. Hansson et al. Measuring research performance during a changing relationship between science and society, 2011.

[14] A. Salter et al. The economic benefits of publicly funded basic research: a critical review, 2001.

[15] B. van der Meulen. Evaluating the societal relevance of academic research: A guide, 2010.

[16] C. Donovan et al. The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental, and cultural returns of publicly funded research, 2008.

[17] A. Salter et al. Measuring third stream activities, 2002.

[18] R. Frodeman et al. Peer review and the ex ante assessment of societal impacts, 2011.

[19] R. Smith et al. Measuring the social impact of research, BMJ: British Medical Journal, 2001.

[20] P. Nightingale et al. Peer review and the relevance gap: Ten suggestions for policy-makers, 2007.

[21] S. Hanney et al. Evaluating the Benefits from Health Research and Development Centres, 2000.

[22] F. A. van Vught et al. U-Multirank: design and testing the feasibility of a multidimensional global university ranking: final report, 2011.