A review of bibliometric and other science indicators and their role in research evaluation

Recent reductions in research budgets have created a need for greater selectivity in resource allocation, and measures of past performance remain among the most promising means of deciding between competing claims on funds. Bibliometry, the measurement of scientific publications and of their impact on the scientific community, assessed by the citations they attract, provides a portfolio of indicators that can be combined to give a useful picture of recent research activity. This state-of-the-art review outlines the various methodologies that have been developed in terms of their strengths, weaknesses and particular applications. The present limitations of science indicators in research evaluation are considered, and some future directions for the development of techniques are suggested.
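The idea of combining publication and citation counts into partial indicators can be made concrete with a minimal sketch. The review prescribes no specific formulas, so everything below is an assumption for illustration: the citation counts are invented, and the field baseline and the simple field-normalised ratio stand in for whatever normalisation a real evaluation would use.

```python
# Illustrative sketch only: hypothetical data and an assumed normalisation,
# not a method taken from the review.

# Citation counts for a research group's recent papers (hypothetical data).
citations = [12, 3, 0, 45, 7, 1, 19]

# Two basic bibliometric partial indicators.
papers = len(citations)                    # publication output
total_citations = sum(citations)           # gross citation impact
citations_per_paper = total_citations / papers

# A simple field-normalised impact indicator: the group's mean citation
# rate relative to an (assumed) average for its field, so that groups in
# low-citing and high-citing fields can be compared on one scale.
FIELD_MEAN_CITATIONS = 8.0                 # assumed field baseline
relative_impact = citations_per_paper / FIELD_MEAN_CITATIONS

print(f"papers: {papers}")
print(f"citations per paper: {citations_per_paper:.2f}")
print(f"impact relative to field: {relative_impact:.2f}")
```

No single such ratio is decisive on its own; the abstract's point is that several partial indicators of this kind, taken together, give a usable picture of recent research activity.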
