Is the test constructor a facet?

Most modern language testers believe that the writing of a successful instrument begins with specifications, and that instruments constructed without specifications are likely to go astray. The purpose of this study was to explore the relative impact of the test developer on the performance of test-takers using multiple-choice reading comprehension tests that had no specifications. Traditional reading comprehension tests often consist of short prose passages followed by sets of multiple-choice comprehension questions. The test constructor formulates the stem and the choices on the basis of his or her own interpretation of the passage, and the test-takers provide evidence of their comprehension through their responses to the items so designed. Since the characteristics of the test method limit the responses test-takers can provide, the expected response becomes part of the test method. Accordingly, it seems reasonable to suggest that there may be a facet associated with the test developer. Six passages, each with three different sets of multiple-choice items constructed by three (groups of) individuals, were trialled on 335 Iranian EFL (English as a foreign language) learners. The results revealed differential performance on almost all sets, suggesting a test constructor effect.
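
A minimal sketch of how a test constructor effect of this kind might be checked, assuming the comparison is between mean scores on the three item sets written for the same passage (the abstract does not state the statistical procedure used, and the data below are simulated, not the study's results):

```python
# Hypothetical sketch: does the item set (i.e., the constructor), rather than
# the passage alone, influence test-taker performance?
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Simulate one passage: three item sets (constructors A, B, C), each taken
# by a different subgroup of test-takers (roughly 335 learners split in three).
n_per_group = 110
true_means = [0.62, 0.55, 0.70]   # hypothetical mean proportion-correct per set
scores = [rng.normal(loc=m, scale=0.12, size=n_per_group) for m in true_means]

# One-way ANOVA across the three item sets built from the same passage.
f_stat, p_value = f_oneway(*scores)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value would be consistent with a constructor (item-writer) effect:
# performance differs by who wrote the items, even though the passage is fixed.
```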
