Turning the tables: A university league-table based on quality not quantity

Background: Universities closely watch international league tables because these tables influence governments, donors and students. Achieving a high ranking in a table, or an annual rise in ranking, allows universities to promote their achievements using an externally validated measure. However, league tables predominantly reward measures of research output, such as publications and citations, and may therefore be promoting poor research practices by encouraging the "publish or perish" mentality.

Methods: We examined whether a league table could be created based on good research practice. We rewarded researchers who cited a reporting guideline; reporting guidelines help researchers report their research completely, accurately and transparently, and were created to reduce the waste of poorly described research. We used the reporting guidelines of the EQUATOR Network, which means our tables are mostly relevant to health and medical research. We used Scopus to identify the citations.

Results: Our cross-sectional tables for the years 2016 and 2017 included 14,408 papers with 47,876 author affiliations. We ranked universities and included a bootstrap measure of uncertainty. We clustered universities into five groups of similar performance in an effort to avoid over-interpreting small differences in ranks.

Conclusions: We believe there is merit in considering more socially responsible criteria for ranking universities, and this could encourage better research practice internationally if such tables become as valued as the current quantity-focused tables.
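The bootstrap measure of rank uncertainty described in the Methods can be sketched as follows. This is a minimal illustration only, using invented university names and paper counts rather than the study's data: papers (with their author affiliations) are resampled with replacement, the table is re-ranked on each resample, and the spread of each university's rank across resamples gives an interval around its point rank.

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical data (illustrative only, not the study's data):
# each paper is a list of the universities in its author affiliations.
papers = (
    [["Uni A"]] * 120 + [["Uni B"]] * 100 +
    [["Uni B", "Uni C"]] * 30 + [["Uni C"]] * 70
)

def rank_table(sample):
    """Count guideline-citing affiliations and rank universities (1 = most)."""
    counts = Counter(u for paper in sample for u in paper)
    ordered = sorted(counts, key=counts.get, reverse=True)
    return {u: i + 1 for i, u in enumerate(ordered)}

point_ranks = rank_table(papers)

# Bootstrap: resample papers with replacement and re-rank each time.
boot_ranks = {u: [] for u in point_ranks}
for _ in range(1000):
    sample = random.choices(papers, k=len(papers))
    for u, r in rank_table(sample).items():
        boot_ranks[u].append(r)

for u in sorted(point_ranks, key=point_ranks.get):
    ranks = sorted(boot_ranks[u])
    lo, hi = ranks[24], ranks[974]  # ~95% bootstrap interval of the rank
    print(f"{u}: point rank {point_ranks[u]}, 95% interval [{lo}, {hi}]")
```

Universities whose bootstrap rank intervals overlap heavily are hard to distinguish, which motivates the paper's grouping of universities into clusters rather than reporting single ranks.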
