The quality of peer review is a longstanding issue within the Design Society, with concerns over the consistency and transparency of reviews raised frequently. Previous research has sought to quantify these concerns by describing the variability of review scores and correlating it with academics’ backgrounds. This paper aims to update and advance the current understanding of peer review within the Design Society by characterising review behaviour with the addition of eye tracking. Seventeen academics attending Design 2014 took part in an experiment, and its results are discussed here with the aim of answering two research questions: do different review strategies exist, and if so, what are they? And do the character traits of reviewers affect review strategy? The results confirm findings from previous research, suggesting that little has changed since the topic was last reported and that inconsistency remains a problem. However, some of this inconsistency is potentially explainable by the review strategies identified in the eye tracking data.