Bibliometric big data and social media tools provide new opportunities to aggregate and analyze researchers' scholarly impact. The purpose of the current paper is to describe the process and results we obtained after aggregating a list of public Google Scholar profiles representing researchers in Health Policy and Management or closely related disciplines. We extracted publication and citation data on 191 researchers who are affiliated with health administration programs in the U.S. With these data, we created a publicly available listing of faculty that includes each person's name, affiliation, year of first citation, total citations, h-index, and i10-index. The median number of total citations per individual faculty member was 700, while the maximum was 46,363. The median h-index was 13, while the maximum was 91. We plan to update these statistics and add new faculty to our public listing as new Google Scholar profiles are created by faculty members in the field. This listing provides a resource for students and faculty in our discipline to easily compare the productivity and publication records of faculty members in their own and other departments. Similarly, this listing provides a resource for faculty, including department chairs and deans, who desire discipline-specific context for promotion and tenure processes.

INTRODUCTION

Objectively and consistently measuring individuals' research productivity is an important part of academic processes, including hiring decisions, annual reviews, grant funding decisions, and tenure and promotion assessments. Despite many potential problems and caveats, the publication is a widely available and recognized unit of productivity (Petersen, Wang, & Stanley, 2010; Wilhite & Fong, 2012). Moreover, in an era of growing public scrutiny of academic effort and activities and increased competition for funding resources, a listing of a researcher's publications provides one piece of quantitative evidence that can be easily observed.
Bibliometric analyses range from the rather crude to the complex. On one end of the spectrum is the raw count of published articles in a given timeframe (e.g., annually). Beyond counts, quality is often assessed by journal impact factor (Garfield, 2006) or journal reputation within the discipline (Brooks, Walker, & Szorady, 1991; Menachemi, Hogan, & DelliFraine, 2015). Other commonly used researcher-level metrics include a researcher's total number of citations, median number of citations, and the h-index (Hirsch, 2005). The h-index is defined as an individual researcher's number of publications "h" that each have at least "h" citations. The h-index lends itself to being calculated over a person's career or for specific time periods (e.g., the last 5 years) and thus can facilitate assessments of one's performance over time. While the h-index and other metrics each have their strengths and weaknesses (Bornmann & Daniel, 2007; Bornmann, Mutz, & Daniel, 2008; Hirsch, 2005, 2007), the increasing availability of public data and analytic tools makes it easier to compute these metrics for many researchers, benchmark performance, and thus enable more objective and consistent comparisons within and across disciplines.
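As a minimal sketch, the h-index definition above can be computed directly from a researcher's per-paper citation counts. The function name and the example citation counts are illustrative, not drawn from the paper's data.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    each have at least h citations (Hirsch, 2005)."""
    h = 0
    # Rank papers from most to least cited; at rank i (1-based),
    # the i-th paper must have >= i citations to extend h.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical researcher with five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4 papers have >= 4 citations, so h = 4
```

The same sorted list also yields Google Scholar's i10-index (the count of papers with at least 10 citations), e.g. `sum(c >= 10 for c in citations)`.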
REFERENCES

[1] Nir Menachemi, et al. Journal Rankings by Health Management Faculty Members: Are There Differences by Rank, Leadership Status, or Area of Expertise? Journal of Healthcare Management, 2015.
[2] J. Rumsfeld, et al. Insights from advanced analytics at the Veterans Health Administration. Health Affairs, 2014.
[3] M. Pusic, et al. Developing the role of big data and analytics in health professional education. Medical Teacher, 2014.
[4] T. Murdoch, et al. The inevitable application of big data to health care. JAMA, 2013.
[5] E. Fong, et al. Coercive Citation in Academic Publishing. Science, 2012.
[6] H. Stanley, et al. Methods for measuring the citations and productivity of scientists across time and discipline. Physical Review E, 2009.
[7] Themis Lazaridis, et al. Ranking university departments using the mean h-index. Scientometrics, 2010.
[8] Lokman I. Meho, et al. Citation Counting, Citation Ranking, and h-Index of Human-Computer Interaction Researchers: A Comparison between Scopus and Web of Science. ArXiv, 2008.
[9] Lutz Bornmann, et al. Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. J. Assoc. Inf. Sci. Technol., 2008.
[10] Lokman I. Meho, et al. Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. J. Assoc. Inf. Sci. Technol., 2007.
[11] J. Hirsch. Does the h index have predictive power? Proceedings of the National Academy of Sciences, 2007.
[12] Lutz Bornmann, et al. What do we know about the h index? J. Assoc. Inf. Sci. Technol., 2007.
[13] E. Garfield. The history and meaning of the journal impact factor. JAMA, 2006.
[14] Marjori Matzke, et al. F1000Prime recommendation of An index to quantify an individual's scientific research output. 2005.
[15] Mônica G. Campiteli, et al. An index to quantify an individual's scientific research valid across disciplines. arXiv physics/0509048, 2005.
[16] Pantelis Kalaitzidakis, et al. Rankings of Academic Journals and Institutions in Economics. 2003.
[17] M. Porta, et al. Rating journals in health care administration. Medical Care, 1995.
[18] C. H. Brooks, et al. Rating Journals in Health Care Administration: The Perceptions of Program Chairpersons. Medical Care, 1991.