Online assessment of strategic reading literacy skills

This study investigates the possibility of assessing strategic reading literacy skills with computers. The critical added value of this assessment is the recording of online indices of the reader's behavior that can be interpreted in terms of strategies. The study combines the materials of a standardized paper-and-pencil reading literacy test, CompLEC (Llorens et al., 2011), with Read&Answer (Vidal-Abarca et al., 2011), a technology that presents texts and questions under a masking procedure and records reading times and readers' actions, in order to develop a computer-based version called e-CompLEC. We found that the reliability and validity of the two versions are largely equivalent, and that e-CompLEC provides self-regulation and reading-behavior indices that predict performance. The study also shows that self-regulation is an important component of reading literacy processes.

Highlights: We test the possibility of assessing reading literacy skills automatically. The assessment includes performance and online indices of the reader's behavior. Our computer-based and paper-and-pencil assessment tools are largely equivalent. Online self-regulation and reading-behavior indices predict performance scores.
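
To make the notion of online indices concrete, the following is a minimal sketch (in Python) of how per-segment reading times, visits, and rereading episodes could be summarized from a Read&Answer-style log of unmasking events. The event-log format, field names, and index definitions are assumptions made for illustration only, not the authors' actual recording format or scoring procedure.

    # Illustrative sketch only: the event-log format and index names are assumptions,
    # not the actual Read&Answer / e-CompLEC implementation.
    from collections import defaultdict

    def reading_indices(events):
        """Summarize unmasking events into simple online reading-behavior indices.

        `events` is assumed to be a list of dicts such as
        {"segment": "text_1", "action": "unmask", "duration_ms": 5300},
        one entry per time the reader unmasked a text segment or question.
        """
        time_per_segment = defaultdict(float)   # total seconds each segment was visible
        visits_per_segment = defaultdict(int)   # how many times each segment was opened

        for event in events:
            if event["action"] == "unmask":
                time_per_segment[event["segment"]] += event["duration_ms"] / 1000.0
                visits_per_segment[event["segment"]] += 1

        total_time = sum(time_per_segment.values())
        rereads = sum(max(n - 1, 0) for n in visits_per_segment.values())
        return {
            "total_reading_time_s": total_time,
            "segments_visited": len(visits_per_segment),
            "rereading_episodes": rereads,  # crude proxy for self-regulated returns to the text
        }

    # Example log: three unmasking events, including one return to segment text_1.
    log = [
        {"segment": "text_1", "action": "unmask", "duration_ms": 5300},
        {"segment": "text_2", "action": "unmask", "duration_ms": 2100},
        {"segment": "text_1", "action": "unmask", "duration_ms": 1800},
    ]
    print(reading_indices(log))  # -> roughly 9.2 s over 2 segments, with 1 rereading episode

Indices of this kind (e.g., time on text, revisits to the text after reading a question) are the sort of behavior measures that can then be entered as predictors of comprehension performance.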

[1] Kate J. Garland, et al. Paper-based versus computer-based assessment: is workload another test mode effect?, 2004, Br. J. Educ. Technol.

[2] Sascha Schroeder, et al. What Readers Have and Do: Effects of Students' Verbal Ability and Reading Time Components on Comprehension with and without Text Availability, 2011.

[3] Roy B. Clariana, et al. Paper-based versus computer-based assessment: key factors associated with the test mode effect, 2002, Br. J. Educ. Technol.

[4] Geoff N. Masters, et al. Measuring student knowledge and skills: the PISA 2000 assessment of reading, mathematical and scientific literacy, 2000.

[5] Michael Russell, et al. Testing On Computers, 1999.

[6] Joseph P. Magliano, et al. Chapter 9: Toward a Comprehensive Model of Comprehension, 2009.

[7] Øistein Anmarkrud, et al. Task-oriented reading of multiple documents: online comprehension processes and offline products, 2013.

[8] Eduardo Vidal-Abarca, et al. Impact of Question-Answering Tasks on Search Processes and Reading Comprehension, 2009.

[9] Danielle S. McNamara, et al. Influence of Question Format and Text Availability on the Assessment of Expository Text Comprehension, 2007.

[10] Douglas F. Becker, et al. The Score Equivalence of Paper-and-Pencil and Computerized Versions of a Speeded Test of Reading Comprehension, 2002.

[11] Walter Kintsch, et al. Comprehension: A Paradigm for Cognition, 1998.

[12] Tom Krenzke, et al. Literacy, Numeracy, and Problem Solving in Technology-Rich Environments among U.S. Adults: Results from the Program for the International Assessment of Adult Competencies 2012. First Look. NCES 2014-008, 2013.

[13] Matthew T. McCrudden, et al. Relevance and Goal-Focusing in Text Processing, 2007.

[14] Russell S. Ende. Reading for Understanding in Grades 7, 8, and 9, 1971.

[15] Eduardo Vidal-Abarca, et al. The Effects of Tasks on Integrating Information From Multiple Documents, 2008.

[16] Geoffrey B. Duggan, et al. Text skimming: the process and effectiveness of foraging through text under time pressure, 2009, Journal of Experimental Psychology: Applied.

[17] R. Bennett. Online Assessment and the Comparability of Score Meaning, 2003.

[18] Laura Gil, et al. Papel de los procesos metacognitivos en una tarea de pregunta-respuesta con textos escritos, 2009.

[19] Eduardo Vidal-Abarca, et al. Recording online processes in task-oriented reading with Read&Answer, 2011, Behavior Research Methods.

[20] P. K. Murphy, et al. Persuasion online or on paper: a new take on an old issue, 2003.

[21] A. Bakker, et al. Suppressor Variables in Path Models, 2001.

[22] K. Rayner. Eye movements in reading and information processing: 20 years of research, 1998, Psychological Bulletin.

[23] Eduardo Vidal-Abarca, et al. On-line Assessment of Comprehension Processes, 2009, The Spanish Journal of Psychology.

[24] Thomas Hoffmann, et al. Examining the Effect of Computer-Based Passage Presentation on Reading Test Performance, 2005.

[25] Joseph P. Magliano, et al. Assessing comprehension during reading with the Reading Strategy Assessment Tool (RSAT), 2011, Metacognition and Learning.

[26] Heiko Rölke, et al. The time on task effect in reading and problem solving is moderated by task difficulty and skill: Insights from a computer-based large-scale assessment, 2014.

[27] Eduardo Vidal-Abarca, et al. Individual Differences for Self-Regulating Task-Oriented Reading Activities, 2010.

[28] Jean-François Rouet, et al. The Skills of Document Use: From Text Comprehension to Web-Based Learning, 2006.

[29] Daniel H. Robinson, et al. Speed and Performance Differences among Computer-Based and Paper-Pencil Tests, 2004.

[30] P. Pamela, et al. Recent Trends in Comparability Studies, 2005.

[31] Randy Elliot Bennett, et al. Inexorable and Inevitable: The Continuing Story of Technology and Assessment, 2002.

[32] John Dunlosky, et al. A revised methodology for research on metamemory: Pre-judgment Recall and Monitoring (PRAM), 2004, Psychological Methods.

[33] Mabel A. B. Bessey, et al. Reading for understanding, 1936.

[34] Johanna K. Kaakinen, et al. Perspective Effects on Expository Text Comprehension: Evidence From Think-Aloud Protocols, Eyetracking, and Recall, 2005.

[35] Martina Ziefle, et al. Investigating paper vs. screen in real-life hospital workflows: Performance contradicts perceived superiority of paper in the user experience, 2011, Int. J. Hum. Comput. Stud.

[36] Eduardo Vidal-Abarca, et al. Summary versus Argument Tasks when Working with Multiple Documents: Which Is Better for Whom?, 2010.

[37] Eduardo Vidal-Abarca Gámez, et al. Prueba de competencia lectora para educación secundaria (CompLEC), 2011.

[38] Lyle F. Schoenfeldt, et al. Guidelines for computer-based psychological tests and interpretations, 1989.

[39] Jacob Cohen, et al. Applied multiple regression/correlation analysis for the behavioral sciences, 1979.

[40] Daniel J. Bernstein, et al. An Examination of the Equivalence between Non-Adaptive Computer-Based and Traditional Testing, 2001.

[41] Jaeyool Boo, et al. Comparability of a paper-based language test and a computer-based language test, 2003.

[42] T. Trabasso, et al. Constructing inferences during narrative text comprehension, 1994, Psychological Review.

[43] Andreas Schleicher, et al. PISA 2009 Assessment Framework: Key Competencies in Reading, Mathematics and Science, 2009.

[44] Hak Joon Kim, et al. Reading from an LCD monitor versus paper: Teenagers' reading performance, 2013.

[45] John Mazzeo. Comparability of Computer and Paper-and-Pencil Scores for Two CLEP General Examinations. College Board Report No. 91-5, 1991.

[46] Eduardo Vidal-Abarca, et al. Evaluación de las estrategias y procesos de comprensión: el Test de Procesos de Comprensión, 2008.

[47] Li Yujian, et al. A Normalized Levenshtein Distance Metric, 2007, IEEE Transactions on Pattern Analysis and Machine Intelligence.