Over the last two decades, research has suggested that candidates' test performances, and hence their scores, are collaboratively achieved through the interviewing and scoring processes, and that inter-interviewer variation can create unfair situations. To build a more precise picture of the impact of inter-interviewer variation, this study examines the variability of interviewer behaviour, its influence on a candidate's performance, and raters' consequent perceptions of the candidate's ability on analytical rating scales (for example, pronunciation, grammar, fluency). The data come from two interview sessions involving the same candidate with two different interviewers, and the video-taped interviews are rated by 22 raters on five marking categories. The results show that significantly different scores were awarded for ‘pronunciation’ and ‘fluency’ across the two interviews. The reasons for these differences are discussed in the light of conversation analysis findings. The paper concludes with suggestions as to how the potential unfairness caused by interviewer variability could be addressed.