Quality in Engineering Education Research: arriving at consensus

Background: Arguably the most important opportunity to acquire the standards and norms of any discipline, and to develop researchers' judgement, is the peer review process - and this is probably particularly true in an emerging discipline such as engineering education. Ironically, research in many disciplines has established that the review process is deeply flawed in conception as well as (often) in operation, with the American Medical Association asserting that if peer review were a drug it would never be allowed onto the market. And yet university ranking systems for published research, on which all of our careers depend, rely on this flawed instrument. With this in mind, we have been examining how members of our community (AAEE) give and respond to reviews, with a view to making the process more useful.

Purpose: Reviewing is an inexact and subjective process, so it would be misguided to think that interrater reliability or some notion of objective 'truth' can be attained. Instead, we ask what reviewers need to do to provide helpful advice that can shape norms and standards in the field.

Design/Method: Previous work (Willey et al., 2011; Jolly et al., 2011) indicated a need for well-expressed criteria that would guide authors on what a publication should contain and guide reviewers in how judgements should be made. With the help of a Delphi panel made up of 12 international researchers in the field, a set of criteria was developed. Volunteers were then sought to apply the criteria to sample texts in an online tool (SPARK plus). Individual interviews with some respondents were then used to clarify participants' understandings and goals.

Results: The criteria developed by the Delphi panel are those being used for this conference. The members of the panel particularly approved of the 'comments' accompanying the criteria, which were intended primarily as guidance to authors about acceptable practice.
Anecdotal evidence to date suggests that the criteria and comments clarify expectations for authors, but the matter of standards will remain. The use of the criteria in the second stage, and analysis of the discussions in particular, will produce information about both present expectations and practices and visions of future growth and improvement.

Conclusions: Our analysis of the stage 2 data will aim to describe consensus on research quality and how to use the peer review process to help attain it, in the form of recommendations for future application of the criteria, in journals as well as at conferences. Our international experts from the Delphi panel have expressed an interest in being involved in stage 2 and informed about the outcomes, so the potential also exists for this community to develop best-practice peer review in engineering education through the sharing of their expertise in this way.