Language tests have until now followed one or the other of two strategies, focusing either on the difficulty of the questions or on the quality of the performance, but never on both. This article describes the use of the partial credit form of the Rasch model in the analysis and calibration of a set of writing tasks. Qualitative assessments of components of writing competence are rescaled to take into account the empirical difficulty of each grade in each assessment, in order to provide more generalisable measures of writing ability. To exploit this kind of test analysis in assessing language production, the tasks must be carefully controlled and the assessment scales and criteria adapted to suit the specific demands of each task. The analysis shows the pattern of grade difficulties across task components, and this pattern is discussed in some detail. For these tasks at least, it appears that the difficulty of minimally engaging with a task is highly task-specific, whereas the difficulty of achieving competence in the task depends more on competence in the components of writing skill.
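For context, the partial credit model referred to above is standardly written as follows; this is a sketch in the conventional notation (person ability θ_n, item step difficulties δ_ik), not the article's own formulation:

$$
P(X_{ni} = x) \;=\; \frac{\exp\!\Big(\sum_{k=0}^{x} (\theta_n - \delta_{ik})\Big)}{\sum_{h=0}^{m_i} \exp\!\Big(\sum_{k=0}^{h} (\theta_n - \delta_{ik})\Big)}, \qquad x = 0, 1, \ldots, m_i,
$$

where the sum for $k = 0$ is defined to be zero, $\theta_n$ is the ability of person $n$, and $\delta_{ik}$ is the difficulty of the $k$-th grade threshold of item $i$. Under this model, the "empirical difficulty of each grade in each assessment" corresponds to the estimated step difficulties $\delta_{ik}$, which is what allows raw qualitative grades to be rescaled onto a common ability metric.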