Rapid extraction of event participants in caused motion events

When viewing a complex event, observers must identify the entities involved and compute the relationships between them. For example, when viewing a caused motion event (e.g., a man raking leaves into a basket), viewers need to identify the Agent (man), the affected object or Patient (leaves), the Instrument (rake), and the Goal (basket). In this paper we use eye-tracking to explore how this process of event apprehension unfolds. Our study indicates that viewers extract event components rapidly, but some components are extracted faster than others. Moreover, saccade patterns show a consistent structure when participants are asked to identify specific event components. In caused motion events, attention is allocated to the Patient during the early stages of processing even when the Patient is not the target. We discuss the implications of this work for how people perceive complex events.
