Effectiveness of eye movement modeling examples in problem solving: The role of verbal ambiguity and prior knowledge

Abstract Eye movement modeling examples (EMME) are video modeling examples with the model's eye movements superimposed. Thus far, EMME on problem-solving tasks have successfully guided students' attention, but this has not translated into higher learning outcomes. We therefore investigated the role of ambiguity of the verbal explanation and of prior knowledge in the effectiveness of EMME on geometry problems. In Experiment 1, 57 university students observed EMME or regular video modeling examples (ME) with ambiguous verbal explanations. Eye-tracking data revealed that, as in prior research with unambiguous explanations, EMME successfully guided students' attention but did not improve test performance, possibly due to students' high prior knowledge. Experiment 2 was therefore conducted with 108 secondary education students who had less prior knowledge, using a 2 (EMME/ME) × 2 (ambiguous/unambiguous explanations) between-subjects design. Verbal ambiguity did not affect learning, but students in the EMME conditions outperformed those in the ME conditions.
