Research Assessment and Bibliometrics: Bringing Quality Back In

Introduction

Bibliometric indicators are used to compare, assess, and evaluate research performance (see, e.g., Giménez-Toledo et al., 2007; Lane, 2010). Recently, however, scholars have voiced protest against bibliometric assessments (see, e.g., Lawrence, 2002; Molinié & Bodenhausen, 2010; Drubin, 2014). The arguments put forward are manifold. For example, the journal impact factor is often used to evaluate individual researchers even though it was never meant for that purpose (DORA, 2013). In addition, there are myriad perverse or unintended effects, such as a focus on high-impact journals and mainstream topics, a preference for review articles and short communications, strategic behavior, and a lack of replication owing to the low reputation of replication studies (e.g., Butler, 2007; Lawrence, 2003; Moonesinghe et al., 2007). Furthermore, scholars from the social sciences and humanities (SSH) criticize that bibliometric indicators cannot capture quality (e.g., Plumpe, 2009). The authors of this paper were involved in a project to develop quality criteria and indicators for humanities research (see http://www.psh.ethz.ch/crus). Here, we argue that while bibliometric indicators and methods are powerful tools for describing research practices and, to some extent, scientific impact, problems arise when they are readily used as quality indicators in research assessments. We feel that other disciplines, too, can learn from humanities scholars' critique of simplistic quantitative assessments and from the findings of research on quality in the humanities.

[1] E. Giménez-Toledo, et al. From experimentation to coordination in the evaluation of Spanish scientific journals in the humanities and social sciences, 2007.

[2] P. Lawrence. Rank injustice, 2002, Nature.

[3] E. Towpik, et al. San Francisco Declaration on Research Assessment (DORA), 2013.

[4] M. Khoury, et al. Most Published Research Findings Are False—But a Little Replication Goes a Long Way, 2007, PLoS Medicine.

[5] G. Bodenhausen, et al. Bibliometrics as weapons of mass citation, 2010, Chimia.

[6] Sven E. Hug, et al. Criteria for assessing research quality in the humanities: a Delphi study among scholars of English literature, German literature and art history, 2013.

[7] J. Lane. Let's make science metrics more scientific, 2010, Nature.

[8] Hans-Dieter Daniel, et al. Four types of research in the humanities: Setting the stage for research quality criteria in the humanities, 2012.

[9] Hans-Dieter Daniel, et al. Indicators for Research Quality in the Humanities: Opportunities and Limitations, 2012.

[10] Hans-Dieter Daniel, et al. A Framework to Explore and Develop Criteria for Assessing Research Quality in the Humanities, 2012.

[11] L. Butler, et al. Assessing university research: A plea for a balanced approach, 2007.

[12] P. Lawrence. The politics of publication, 2003, Nature.

[13] R. Cagan. The San Francisco Declaration on Research Assessment, 2013, Disease Models & Mechanisms.
