Making Lists, Enlisting Scientists: The Bibliometric Indicator, Uncertainty and Emergent Agency

The question of how to measure research quality recently gained prominence in the context of Danish research policy, as part of implementing a new model for allocating funds to universities. The measurement device took the form of a bibliometric indicator. Analyzing the making of the indicator, the paper engages the literature on social studies of quantification and classification. The analysis proceeds from the inside out, through a description of the organizational processes and classificatory disputes through which the indicator was developed. It addresses questions such as: How was the indicator conceptualised? How were notions of scientific knowledge and collaboration inscribed and challenged in the process? The analysis shows a two-sided process in which scientists become engaged in making lists, but which is simultaneously a way for research policy to enlist scientists. In conclusion, the analysis offers suggestions for a reorientation of the study of emergent quantification systems.
