Routine monitoring of performance: what makes health research and development different?

Increasing attention is being directed to measuring and monitoring the use of health-related R&D funding, partly to justify the expenditure and partly to ensure that R&D effort is directed towards the paybacks desired by funders. These paybacks include contributions to knowledge, contributions to R&D capacity, political benefits, benefits to the health service and to patients, and wider economic benefits. This paper addresses the issues that must be considered when designing a routine performance management system for health R&D. Conventional methods of routine performance management are often inappropriate in this context because the outcomes of research are intangible and unpredictable, heterogeneous across projects and programmes, and hard to attribute to particular R&D support. To be effective in this context, a routine system must instead combine quantitative and qualitative indicators, drawing on information from several different sources. The system must also reach acceptable levels (as defined by the funder) on each of the following criteria: it must measure the dimensions of payback that the funder values; it must be decision-relevant; it must be consistent with truthful compliance; it must minimise perverse incentives; and it must have acceptable net costs. Above all, the system itself must generate a positive payback. We illustrate these issues by outlining a system that might be used to monitor the payback from government-funded R&D.