Characterising Authors on the Extent of their Paper Acceptance: A Case Study of the Journal of High Energy Physics

New researchers are often curious about the recipe that could improve the chances of their papers being accepted at a reputed venue (journal or conference). In search of such a recipe, we investigate the profiles and peer-review texts of authors whose papers are almost always accepted at a venue (the Journal of High Energy Physics in this work). We find that authors with a high acceptance rate tend to have a large number of citations, a high h-index, a larger number of collaborators, etc. We also notice that they receive relatively lengthy and positive reviews for their papers. In addition, we construct three networks -- co-reviewer, co-citation and collaboration -- and study network-centric features along with intra- and inter-category edge interactions. We find that authors with a high acceptance rate are more 'central' in these networks; the volumes of intra- and inter-category interactions also differ drastically between authors with a high acceptance rate and the other authors. Finally, using the above set of features, we train standard machine learning models (random forest, XGBoost) and obtain very high class-wise precision and recall. In a follow-up discussion, we further describe how, beyond author characteristics, the peer-review system itself might play a role in amplifying the distinction among the different categories; this could lead to potential discrimination and unfairness, and calls for further investigation by the system administrators.
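The classification pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the feature names (citation count, h-index, collaborator count, network centrality) and the synthetic labels are assumptions standing in for the paper's real author features and acceptance-rate categories, and scikit-learn's random forest is used here (the paper's XGBoost model would be trained analogously via the `xgboost` package).

```python
# Hypothetical sketch of the feature-based author classification described
# in the abstract. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 600
# Toy feature matrix: columns stand in for citations, h-index,
# number of collaborators, and a network-centrality score.
X = rng.normal(size=(n, 4))
# Toy binary labels (high vs. low acceptance rate), loosely tied to
# the first and last features plus noise.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

# Class-wise precision and recall, as reported in the paper.
print(classification_report(y_te, clf.predict(X_te)))
```

On real data, `X` would be built from the author-profile and network-centric features the paper extracts, and the report would show per-category precision and recall.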
