The Measurement of Research Quality [President's Column]
university on research bibliometrics, defined as “the quantitative analysis of scholarly output.” It was something of an eye-opener. Of course, many of us are aware of impact factors, h-indices, and so forth, but what surprised me was the sheer scale and sophistication of the apparatus that has emerged in recent years to assign numerical values to aspects of research performance.

These metrics are increasingly used for all kinds of important decisions, especially in the academic world: who will be appointed or promoted, which research grants will be awarded, and how the research performance of groups or even entire universities will be evaluated. They acquire a certain legitimacy in that clearly excellent work achieves a high score while obviously weak research scores poorly. In between, however (at least in my view), the situation is far more complex and problematic. Most metrics are based on quantifying citation impact, which varies widely within disciplines. They are also poor at measuring impact on practitioners: a paper with relatively few citations might nevertheless be extensively downloaded from IEEE Xplore and widely used by working engineers, achieving a high impact that the common impact factors do not capture.

Particularly disquieting is the ease with which a particular research metric can quickly become a proxy for research quality, even though the link is not always clear or justified. Ultimately, I feel that the assessment of research quality is a judgment best made by experts in the field, and even that assessment may change significantly over time: there are plenty of examples of papers whose value became apparent only many years after their publication. It seems to me that the surest indicator of research quality within a community is the existence of a true quality culture, together with a commitment to upholding research