WHILE both interdisciplinary research and evaluation are growing throughout the science system, the two meet with increasing frequency. More and more assessments of manuscripts, project proposals, funding programmes, and research organisations are confronted by interdisciplinarity, that is, by research that combines knowledge from different fields. The problem of how to assess interdisciplinary research is thus becoming increasingly pressing. The common response by evaluators is to ‘muddle through’ by slightly adapting evaluation procedures designed for disciplinary research. British funding agencies adjusted the weight of assessment criteria for some small grant schemes aimed at encouraging interdisciplinary research, emphasising the applicant’s track record and the potential impact of the interdisciplinary collaboration rather than experimental details (O’Toole, 2001). Members of the Canadian Research Council proposed the opposite, namely to put less emphasis on the track record when applicants start to work in a field that is new to them (NSERC, 2004). US funding agencies introduced a procedural solution by giving their managers leeway to give higher priority to interdisciplinary proposals that peer reviewers appear to have unjustly overlooked (Brainard, 2002). British and Canadian funding agencies introduced additional interdisciplinary committees (POST, 2002: 4; INST, 2002: chap. 3). This strategy not only brings competent reviewers together but also avoids direct competition between interdisciplinary and disciplinary grant proposals, because the latter are ranked separately (Brainard, 2002). These experiments confirm that there is no consensus about the best way of assessing interdisciplinary research. What assistance can science studies offer? Not much.
While studies of both interdisciplinary research and research evaluation (in particular of the peer-review mechanism) have a long tradition, there is hardly any study that deals with the intersection of the two. Studies of interdisciplinarity have concentrated on the actual research process, often with the aim of identifying conditions that promote or hinder it (see, for example, the contributions in Weingart and Stehr, 2000), without taking the assessment of such processes into account (an exception is Hackett’s chapter in that volume). The problem of interdisciplinarity has surfaced in studies of peer-review processes with reviewers from different fields. These studies revealed that it can be difficult to integrate the different scientific perspectives of reviewers in grant review processes (e.g. Porter and Rossini, 1985; Travis and Collins, 1991) or in the review of journal articles (e.g. Fiske and Fogg, 1990; Mahoney, 1977). Specific precautions are necessary to ensure that interdisciplinary research is not the loser in the assessment process. Procedure matters, as is clearly stated in the recommendations of a recent workshop on “Quality Assessment in Interdisciplinary Research and Education” of the American Association for the Advancement of Science. ‘Getting the process right’ is one of the central challenges of the evaluation of interdisciplinary research.
[1] T. van Leeuwen, et al. (2002). Assessment of the scientific basis of interdisciplinary, applied research: Application of bibliometric methods in Nutrition and Food Research.
[2] M. Mahoney (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research.
[3] A. F. J. van Raan, et al. (1996). Advanced bibliometric methods as quantitative core of peer review based evaluation and foresight exercises. Scientometrics.
[4] A. L. Porter, et al. (1985). Peer Review of Interdisciplinary Research Proposals.
[5] D. W. Fiske, et al. (1990). But the Reviewers Are Making Different Criticisms of My Paper! Diversity and Uniqueness in Reviewer Comments.
[6] H. M. Collins, et al. (1991). New Light on Old Boys: Cognitive and Institutional Particularism in the Peer Review System.