Measuring the match between evaluators and evaluees: cognitive distances between panel members and research groups at the journal level

When research groups are evaluated by an expert panel, it is an open question how one can determine the match between panel and research groups. In this paper, we outline two quantitative approaches that determine the cognitive distance between evaluators and evaluees, based on the journals they have published in. We use example data from four research evaluations carried out between 2009 and 2014 at the University of Antwerp. While the barycenter approach is based on a journal map, the similarity-adapted publication vector (SAPV) approach is based on the full journal similarity matrix. Both approaches determine an entity’s profile based on the journals in which it has published. Subsequently, we determine the Euclidean distance between the barycenter or SAPV profiles of two entities as an indicator of the cognitive distance between them. Using a bootstrapping approach, we determine confidence intervals for these distances. As such, the present article constitutes a refinement of a previous proposal that operates on the level of Web of Science subject categories.
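The two profile constructions and the bootstrap procedure described above can be sketched as follows. This is a minimal illustration with toy data, not the authors' implementation: the function names, the two-dimensional map coordinates, and the per-publication resampling scheme are assumptions made for the sketch.

```python
import numpy as np

def barycenter(journal_coords, pub_counts):
    """Barycenter profile: the mean of an entity's journal map coordinates,
    weighted by the entity's publication share in each journal."""
    w = pub_counts / pub_counts.sum()
    return w @ journal_coords

def sapv(similarity, pub_counts):
    """Similarity-adapted publication vector: the normalized publication
    vector spread over related journals via the journal similarity matrix."""
    p = pub_counts / pub_counts.sum()
    return similarity @ p

def bootstrap_distance_ci(profile_fn, counts_a, counts_b,
                          n_boot=2000, alpha=0.05, seed=0):
    """Point estimate and percentile confidence interval for the Euclidean
    distance between two entities' profiles, obtained by resampling each
    entity's individual publications with replacement."""
    rng = np.random.default_rng(seed)

    def resample(counts):
        # Expand counts into one entry per publication, resample, re-count.
        pubs = np.repeat(np.arange(len(counts)), counts.astype(int))
        sample = rng.choice(pubs, size=len(pubs), replace=True)
        return np.bincount(sample, minlength=len(counts)).astype(float)

    dists = [np.linalg.norm(profile_fn(resample(counts_a))
                            - profile_fn(resample(counts_b)))
             for _ in range(n_boot)]
    lo, hi = np.percentile(dists, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    point = float(np.linalg.norm(profile_fn(counts_a) - profile_fn(counts_b)))
    return point, (float(lo), float(hi))
```

The same `bootstrap_distance_ci` can be fed either profile function: with `barycenter` the distance lives in the two dimensions of the journal map, whereas with `sapv` it is a Euclidean distance in the full journal space.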
