Perspective on Citation Analysis of Scientists
The use of citation analysis to produce measures, or indicators, of scientific performance has generated a considerable amount of discussion (1-10). Not surprisingly, the discussion grows particularly intense when the subject is the use of these measures to evaluate people, either as individuals or in small formal groups, such as departmental faculties in academic institutions. Published descriptions of how citation analysis is being used to define the history of scientific development, or to measure the activity and interaction of scientific specialties, generate relatively little comment from the scientific community at large. And what is generated tends to be calm and reasoned. In contrast, any mention of using citation analysis to measure the performance of specific individuals or groups produces an automatic, and often heatedly emotional, response from the same people who otherwise remain silent. A case in point is a 1975 review in Science (11) of the way citation analysis is being used, on an exploratory basis, by science administrators. The article included a discussion of the use of citation measures to define and monitor changes in the specialty structure of science. This application could have a major impact on the development of science policies. But the spate of letters to the editor that commented on the article dealt only with the use of citation data to help measure individuals and academic departments in cases of tenure, promotion, and grant awards. It is not surprising that the published comments these applications elicit from the general-science community are almost always critical. After all, scientists are no less sensitive to performance measures than other people.
And when you consider that some 25% of the scientific papers published are never cited even once (12), and that the average annual citation count for papers that are cited is only 1.7 (13), it is not hard to understand why citation counts might seem a particularly threatening measure to some. Another reason for the defensive attitude might be the relative newness of citation data as a measure of performance. It will be interesting to compare the reactions of humanities scholars with those of scientists when the Arts &
[1] E. Garfield et al., "Is the ratio between number of citations and publications cited a true constant?", 1976.
[2] A. Rossier, "Letter to the Editor", Paraplegia, 1986.
[3] T. Gustafson, "The Controversy Over Peer Review", Science, 1975.
[4] D. Shapley, "Materials research: scientists show scant taste for breaking ranks", Science, 1976.
[5] N. Wade et al., "Citation analysis: a new tool for science administrators", Science, 1975.