Science evaluation focuses on research institutions as the creators of a steadily growing, multidisciplinary scientific output [Price, 1963]. These institutions compete with one another to rank among the leaders in their disciplines and to document their position through the perception of their publications. Since publication output keeps growing worldwide, the competition is global [see Mervis, 2007; Broad, 2004], with scientific institutions as its main actors. The aim is to achieve the highest possible visibility for institutions and countries [Da Pozzo et al., 2001]. For multidisciplinary institutions in particular, evaluating an institution's standing against a benchmark is not easy [Adam, 2002]. Interdisciplinary comparisons therefore require normalisation: "Citation (and publication) practices vary between fields and over time" [Garfield, 1989], because the disciplines rely on different methods to identify and tackle problems, and different communication practices come into play as well.
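To make the idea of field normalisation concrete, the following is a minimal sketch of a simple mean-based normalisation of citation counts. It is only an illustration of why raw counts from fields with different citation practices are not directly comparable; the function name and the example data are hypothetical, and this is not the J-factor methodology of Mittermaier et al. [1]:

from collections import defaultdict
from statistics import mean

def field_normalised_scores(papers):
    # papers: list of dicts with 'field' and 'citations' keys (hypothetical input format).
    # Each paper's citation count is divided by the mean citation count of its field,
    # so that papers from high- and low-citation fields become comparable.
    by_field = defaultdict(list)
    for p in papers:
        by_field[p["field"]].append(p["citations"])
    field_mean = {f: mean(cs) for f, cs in by_field.items()}
    return [p["citations"] / field_mean[p["field"]] for p in papers]

papers = [
    {"field": "molecular biology", "citations": 40},  # high-citation field
    {"field": "molecular biology", "citations": 20},
    {"field": "mathematics", "citations": 6},         # low-citation field
    {"field": "mathematics", "citations": 2},
]
print(field_normalised_scores(papers))  # [1.33, 0.67, 1.5, 0.5]

Under this toy normalisation, the mathematics paper with 6 citations (1.5 times its field average) scores higher than the molecular biology paper with 40 citations (1.33 times its field average), even though its raw count is much lower.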
[1] Bernhard Mittermaier, et al. Creation of journal-based publication profiles of scientific institutions — A methodology for the interdisciplinary comparison of scientific research based on the J-factor. Scientometrics, 2009.
[2] Jeffrey Mervis, et al. U.S. Output Flattens, and NSF Wonders Why. Science, 2007.
[3] Roland Wagner-Döbler. The system of research and development indicators: Entry points for information agents. Scientometrics, 2005.
[4] Zameer Shah, et al. Measuring science. BMJ: British Medical Journal, 2004.
[5] Eugene Garfield. Evaluating Research: Do Bibliometric Indicators Provide the Best Measures? 1989.