F1000, Mendeley and Traditional Bibliometric Indicators

This article compares the Faculty of 1000 (F1000) quality-filtering results and Mendeley usage data with traditional bibliometric indicators, using a sample of 1397 Genomics and Genetics articles published in 2008 and selected by F1000 Faculty Members (FMs). Both Mendeley user counts and F1000 article factors (FFas) correlate significantly with citation counts and with the associated Journal Impact Factors, but the correlations for Mendeley user counts are much larger than those for FFas. It may be that F1000 is good at identifying the merit of an article from an expert practitioner's point of view, whereas Mendeley user counts are more closely related to traditional citation impact. Articles that attract exceptionally many citations are generally disorder- or disease-related, while those with extremely high social-bookmark user counts are mainly historical or introductory.
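The comparison above rests on rank correlations between each indicator and citation counts. As a minimal sketch of that kind of analysis, the following computes Spearman's rho (Pearson correlation of the rank vectors, with average ranks for ties, the standard approach for skewed bibliometric counts) for two indicators against citations. All article counts below are invented toy data, not figures from the study.

```python
# Hypothetical sketch: how strongly do two indicators (Mendeley user
# counts vs. F1000 article factors) correlate with citation counts?
# Spearman's rho is used because bibliometric counts are highly skewed.

def rank(values):
    # Fractional ranking: ties receive the average of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the two rank vectors.
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented per-article data (citations, Mendeley readers, FFa scores).
citations = [120, 45, 300, 10, 75, 220, 5, 90]
mendeley  = [150, 60, 280, 20, 80, 200, 15, 250]
ffa       = [6, 3, 8, 3, 1, 6, 3, 8]

rho_mendeley = spearman(citations, mendeley)  # continuous-valued readers
rho_ffa = spearman(citations, ffa)            # coarse scores, many ties
```

In this toy data, as in the article's findings, the Mendeley correlation comes out larger than the FFa correlation; one contributing factor is that FFa scores take only a few discrete values, so ties compress the rank information available to the correlation.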
