What are we measuring? Refocusing on some fundamentals in the age of desktop bibliometrics.

The central challenge in bibliometrics is finding the best ways to represent complex constructs such as 'quality', 'impact' or 'excellence' using quantitative methods. The marketplace for bibliometric data and services has evolved rapidly, and users now face unprecedented choice in the range of data available: from traditional citation-based indicators to reader ratings and Wikipedia mentions. Choice and ease of access have democratised bibliometrics, making it a tool available to everyone. The era of 'desktop bibliometrics' should be welcomed: it promises greater transparency and opportunities for experimentation in a field that has, frankly, become a little jaded. The downside is that we are in danger of chasing numbers for numbers' sake, with little understanding of what they mean. There is a looming crisis in construct validity, fuelled by supply-side choice and user-side impatience, and this has significant implications for all stakeholders in the research evaluation space.
