Bibliometric statistical properties of the 100 largest European research universities: Prevalent scaling rules in the science system

The statistical properties of bibliometric indicators related to research performance, field citation density, and journal impact were studied for the 100 largest European research universities. A size-dependent cumulative advantage was found for the impact of universities in terms of their total number of citations. In the author's previous work, a similar scaling rule was found at the level of research groups; this scaling rule is therefore conjectured to be a prevalent property of the science system. Lower-performance universities have a larger size-dependent cumulative advantage for receiving citations than top-performance universities. For the lower-performance universities, the fraction of noncited publications decreases considerably with size. Generally, the higher the average journal impact of a university's publications, the lower its number of noncited publications. Average research performance was found not to dilute with size. Evidently, large universities, particularly top-performance ones, are characterized by being “big and beautiful”: they succeed in maintaining high performance over a broad range of activities, most probably an indication of their overall attractive scientific and intellectual power. It was also found that, particularly for the lower-performance universities, the field citation density provides a strong cumulative advantage in citations per publication. The relation between number of citations and field citation density found in this study can be considered a second basic scaling rule of the science system. Top-performance universities publish in journals with significantly higher journal impact than the lower-performance universities. The fraction of self-citations was found to decrease significantly with increasing research performance, average field citation density, and average journal impact.
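
As an illustration of how a size-dependent cumulative advantage of this kind can be quantified, the sketch below fits a power law C = a·S^α (total citations C versus size S, measured as publication output) by ordinary least squares on log-log scales; an exponent α > 1 indicates that impact grows more than proportionally with size. This is a minimal sketch under simplifying assumptions, not the data or the exact fitting procedure of the study (allometric analyses often prefer bivariate line-fitting methods such as reduced major axis regression over ordinary least squares). The function name fit_scaling_exponent and all numbers are hypothetical.

```python
import numpy as np


def fit_scaling_exponent(sizes, citations):
    """Fit C = a * S**alpha by ordinary least squares on log-log scales.

    Returns (alpha, log10_a). alpha > 1 suggests a size-dependent
    cumulative advantage; alpha = 1 would mean total citations grow
    only proportionally with publication output.
    """
    log_s = np.log10(np.asarray(sizes, dtype=float))
    log_c = np.log10(np.asarray(citations, dtype=float))
    alpha, log10_a = np.polyfit(log_s, log_c, 1)  # slope, intercept
    return alpha, log10_a


# Purely synthetic, illustrative numbers (not from the paper):
# publication counts and total citation counts for hypothetical universities.
pubs = np.array([2000, 4000, 8000, 16000, 32000])
cites = np.array([15000, 38000, 95000, 240000, 610000])

alpha, log10_a = fit_scaling_exponent(pubs, cites)
print(f"estimated scaling exponent alpha ≈ {alpha:.2f}")
```

The same log-log fit can be reused for the second scaling rule discussed above, with average field citation density on the horizontal axis and citations per publication on the vertical axis.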
