What Do Editors Maximize? Evidence from Four Leading Economics Journals

We study editorial decision-making using anonymized submission data for four leading economics journals: the Journal of the European Economic Association, the Quarterly Journal of Economics, the Review of Economic Studies, and the Review of Economics and Statistics. We match papers to the publication records of authors at the time of submission and to subsequent Google Scholar citations. To guide our analysis we develop a benchmark model in which editors maximize the expected quality of accepted papers and citations are unbiased measures of quality. We then generalize the model to allow different quality thresholds for different papers, and systematic gaps between citations and quality. Empirically, we find that referee recommendations are strong predictors of citations, and that editors follow the recommendations quite closely. Holding constant the referees' evaluations, however, papers by highly published authors receive more citations, suggesting either that referees impose a higher bar for these authors, or that prolific authors are over-cited. Editors only partially offset the referees' opinions, effectively discounting the citations of more prolific authors in their revise-and-resubmit decisions by up to 80%. To disentangle the two explanations for this discounting, we conduct a survey of specialists, asking them for their preferred relative citation counts for matched pairs of papers. The responses show no indication that prolific authors are over-cited and thus suggest that referees and editors seek to support less prolific authors.
