Improving Referee-Selection and Manuscript Evaluation

To maximize the chance that manuscripts will be evaluated by the most competent referees, four systematic procedures are proposed. The first expands the pool of potential referees. The second selects a set of people from that pool to evaluate a given manuscript. The third solicits from them judgments along five scales: relevance to the journal's objectives, significance of the problem, validity of the results, novelty, and clarity. The fourth combines these judgments into a form of consensus to help the editor reach a decision. Limited field trials over the past five years with the Journal of the Association for Computing Machinery show that implementations of some aspects of these procedures help to improve the quality of accepted papers, where the quality of a paper is interpreted primarily as the likelihood of its being appropriately cited. Analysis of these algorithms leads to interesting research problems, such as the possibility of non-subjective measures of quality through modified co-citation counts. The use of computer conferencing to aid communication among referees, and among them, the author, and the editors, is analyzed for its potentialities and constraints.
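The fourth procedure, combining referee judgments into a consensus, can be sketched minimally as a weighted averaging of scores along the five scales. The weighting scheme, scale names, and 1–5 scoring range below are illustrative assumptions, not the paper's actual algorithm:

```python
# Hypothetical sketch: each referee rates a manuscript on five scales,
# and the ratings are combined into a per-scale weighted mean for the
# editor. The competence weights and the 1-5 range are assumptions.

SCALES = ["relevance", "significance", "validity", "novelty", "clarity"]

def consensus(reviews):
    """reviews: list of (competence_weight, {scale: score in 1..5}) pairs.
    Returns the weighted mean score on each scale."""
    total_weight = sum(w for w, _ in reviews)
    return {
        scale: sum(w * scores[scale] for w, scores in reviews) / total_weight
        for scale in SCALES
    }

# Two referees, the first weighted twice as heavily as the second.
reviews = [
    (1.0, {"relevance": 4, "significance": 3, "validity": 5,
           "novelty": 3, "clarity": 4}),
    (0.5, {"relevance": 5, "significance": 4, "validity": 4,
           "novelty": 2, "clarity": 3}),
]
summary = consensus(reviews)
```

In practice the paper's Delphi-style procedure involves iterated feedback among referees rather than a single averaging pass; this sketch only illustrates the final aggregation step.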
