Not Read, but Nevertheless Solved? Three Experiments on PIRLS Multiple Choice Reading Comprehension Test Items

Multiple-choice (MC) reading comprehension test items comprise three components: a text passage, questions about the text, and MC answer options. The construct validity of this format has been repeatedly criticized. In three between-subjects experiments, fourth graders (N1 = 230, N2 = 340, N3 = 194) worked on three versions of MC items from the Progress in International Reading Literacy Study (PIRLS) 2001 reading comprehension test, with relevant components successively deleted: the “original version” (text, questions, MC answers), a “version without text” (questions, MC answers), and a “version without text and without questions” (MC answers only). Answering the MC items correctly became more difficult as relevant information was eliminated. For the two narrative fictional texts, students' performance on the version without text was not better than chance; conversely, for the informational (nonfictional) text, performance on the version without text was better than chance. In the third condition, students' performance was never better than chance.
