What Comes First: Combining Motion Capture and Eye Tracking Data to Study the Order of Articulators in Constructed Action in Sign Language Narratives

We use synchronized 120 fps motion capture and 50 fps eye tracking data from two native signers to investigate the temporal order in which the dominant hand, the head, the chest, and the eyes begin to produce overt constructed action in transitions from regular narration, in seven short Finnish Sign Language stories. From this material, we derive a sample of ten transitions from regular narration to overt constructed action, annotated in ELAN, which we then process and analyze further in Matlab. The results indicate that the temporal order of the articulators shows both contextual and individual variation, but also recurring patterns that hold across the analyzed sequences and both signers. Most notably, when the discourse strategy changes from regular narration to overt constructed action, the head and the eyes tend to take the leading role, whereas the chest and the dominant hand tend to start acting last. Consequences of the findings are discussed.
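To illustrate the kind of pipeline the abstract describes (ELAN-annotated transitions, then further processing in Matlab), the sketch below shows one plausible way to align a 50 fps gaze signal with 120 fps motion capture and to estimate per-articulator movement onsets around an annotated transition. It is a minimal, assumption-laden illustration rather than the authors' actual code: the file names, variable names, the annotated transition time, and the speed-threshold onset criterion are all hypothetical.

% Minimal Matlab sketch: align a 50 fps gaze signal with 120 fps motion
% capture and estimate per-articulator movement onsets after an annotated
% transition. All file names, variable names, and thresholds are
% illustrative assumptions, not taken from the study.

mocapFps = 120;                      % motion capture frame rate
eyeFps   = 50;                       % eye tracker frame rate

% Hypothetical per-articulator marker traces: N-by-3 matrices (x, y, z).
hand  = readmatrix('hand_marker.csv');
head  = readmatrix('head_marker.csv');
chest = readmatrix('chest_marker.csv');
gaze  = readmatrix('gaze_xy.csv');   % M-by-2 gaze coordinates at 50 fps

% Resample the gaze signal onto the mocap timeline by linear interpolation.
tMocap = (0:size(hand, 1) - 1)' / mocapFps;
tEye   = (0:size(gaze, 1) - 1)' / eyeFps;
gazeOnMocap = interp1(tEye, gaze, tMocap, 'linear', 'extrap');

% Approximate movement onset as the first frame after the annotated
% transition point where the articulator's speed exceeds a threshold.
transitionTime = 12.34;              % hypothetical ELAN annotation, in seconds
startFrame     = round(transitionTime * mocapFps) + 1;

articulators = {hand, head, chest, gazeOnMocap};
names        = {'hand', 'head', 'chest', 'eyes'};

for k = 1:numel(articulators)
    pos   = articulators{k};
    vel   = [zeros(1, size(pos, 2)); diff(pos)] * mocapFps;  % units per second
    speed = sqrt(sum(vel.^2, 2));
    thresh = 0.1 * max(speed);       % illustrative 10%-of-peak-speed criterion
    rel = find(speed(startFrame:end) > thresh, 1, 'first');
    if isempty(rel)
        fprintf('%s: no onset above threshold after the transition\n', names{k});
        continue;
    end
    onsetTime = (startFrame + rel - 2) / mocapFps;
    fprintf('%s: estimated onset at %.3f s\n', names{k}, onsetTime);
end

Sorting the resulting onset times would then give the order in which the articulators start acting for that transition; in practice, onset detection of this kind would be checked against the manual ELAN annotations rather than trusted blindly.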
