Not so snap a judgement: discussing the Peer Reputation metric

The lack of a clear method for judging a researcher's contribution has recently led to the proposal of a new metric, Peer Reputation (PR) [1]. PR ties the selectivity of a publication venue to the reputation of the first author's institution. In [1], the authors compute PR for a number of networking research venues and argue that it is a better indicator of selectivity than a venue's Acceptance Ratio (AR). We agree that PR is a step in the right direction and that it captures substantial information that is missing from AR. Still, we argue in this paper that PR alone is not adequate for providing a solid evaluation of a researcher's contribution. In our study, we discuss and quantitatively evaluate the points on which PR does not sufficiently serve its purpose. To evaluate PR, we gathered data for 11 conferences from different research fields (networking, informatics and electronics) held between 2008 and 2011. We also use three different rankings of doctoral programs in the USA and two world university rankings to study how they influence the PR results.
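
As a concrete illustration only (not necessarily the exact definition used in [1]), a PR-style score for a venue can be thought of as the share of accepted papers whose first author is affiliated with an institution ranked near the top of a chosen ranking. The Python sketch below, with hypothetical function names and a toy ranking, shows one way such a score could be computed.

    # Illustrative sketch: a Peer-Reputation-style score for one venue,
    # computed as the fraction of accepted papers whose first author's
    # institution appears in the top-K of a chosen ranking.
    # The exact formula in [1] may differ; names here are hypothetical.

    def peer_reputation(first_author_institutions, ranking, top_k=50):
        """first_author_institutions: one institution name per accepted paper.
        ranking: list of institution names ordered best-first.
        Returns the share of papers led by a top-`top_k` institution."""
        if not first_author_institutions:
            return 0.0
        top_set = set(ranking[:top_k])
        hits = sum(1 for inst in first_author_institutions if inst in top_set)
        return hits / len(first_author_institutions)

    # Toy example: a venue with 4 accepted papers and a 5-entry ranking
    ranking = ["MIT", "Stanford", "CMU", "Berkeley", "ETH Zurich"]
    papers = ["MIT", "Unknown Tech", "CMU", "Small College"]
    print(peer_reputation(papers, ranking, top_k=5))  # prints 0.5

Under such a definition, the resulting score clearly depends on which ranking is used and on the cut-off top_k, which is why we examine several doctoral-program and world university rankings in our evaluation.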