Two main issues remain at stake in the transcription of gestures (i.e., co-speech gestures and SL signs): i/ the time needed to transcribe; and ii/ the lack of a single system covering both kinds of transcription (Slobin et al., 2001). The time needed to annotate gestural phenomena drastically restricts the size of corpora and therefore the possible generalizations; on the other hand, the lack of a common transcription system makes it difficult to compare studies of sign or gesture forms (see the descriptions of “pointing gestures from a functional point of view” [Kita, 2003] and of “buoys in SL” [Liddell, 2003]) and means of expression (for example, the many co-speech gestures introduced into SL discourse). This situation widens the gap drawn between a “genuine” linguistic production and co-speech gestures (Schembri et al., 2005; Goldin-Meadow & Brentari, 2017), sometimes even essentializing these differences (Singleton et al., 1995; McNeill, 2015).
A formal transcription system designed for SL (developed by the Typannot project), but available for co-speech gestures as well, describes each segment of the upper limb (hand, forearm, arm and shoulder), palm orientation, handshapes and non-manual parameters (including mouth actions, facial expressions, head and torso positions). Each articulator is associated with a different OpenType font (e.g., Typannot_HandShape font, Typannot_MouthAction font, etc.) belonging to a specific font family (Boutet et al., 2018). Every character (in the Unicode sense) of a Typannot_font encodes a single piece of information (for handshapes, for example: fingers, shape, angle and closeness between fingers; Bianchini et al., 2018) needed to describe all the features of an articulator. An advanced system of typographic ligatures lets the user see a single “holistic” glyph containing every feature, ensuring searchability while remaining readable.
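To illustrate the principle, the following minimal Python sketch mimics this character-per-feature encoding; the codepoints and feature names are hypothetical placeholders, not the actual Typannot inventory, and the ligature rendering itself is handled by the fonts, not by this code.

    # Each (hypothetical) character encodes exactly one feature; the Typannot_font's
    # ligatures would display the whole sequence as a single holistic glyph, while
    # queries still operate on the underlying characters.
    FINGER_INDEX   = "\uE101"   # selected finger: index (hypothetical codepoint)
    SHAPE_EXTENDED = "\uE201"   # finger shape: fully extended
    SHAPE_CURVED   = "\uE202"   # finger shape: curved
    CLOSE_CONTACT  = "\uE401"   # closeness: in contact with the thumb

    # A handshape is transcribed as a plain character sequence, one character per feature.
    pointing_handshape = FINGER_INDEX + SHAPE_EXTENDED
    hooked_handshape   = FINGER_INDEX + SHAPE_CURVED + CLOSE_CONTACT

    # Searchability: find every handshape containing a fully extended finger.
    corpus = [pointing_handshape, hooked_handshape]
    hits = [t for t in corpus if SHAPE_EXTENDED in t]
    print(len(hits))  # -> 1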
Thanks to their formal approach, these Typannot_fonts allow: i/ a comparison between SL and gestures/mimics in multimodal discourses; and ii/ in SL, detailed sub-parameter queries, e.g., on the angle of a segment along a particular degree of freedom (e.g., full extension of the hand), and up to the relations among segments; it is even possible to discriminate pointing forms regardless of the finger used. A corpus of multimodal French discourses and French Sign Language (LSF), of several minutes each, has been transcribed (with ELAN) using the Typannot_fonts, both manually and semi-automatically. The semi-automatic transcription in Typannot_fonts is obtained by filming the speaker wearing an IMU device and processing the video with OpenFace (Baltrusaitis et al., 2018).
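As a hedged illustration of such a sub-parameter query, the sketch below scans the annotation values of an ELAN (.eaf) file, assuming those values contain Typannot character sequences; the file name and the codepoint range (reused from the sketch above) are hypothetical.

    import re
    import xml.etree.ElementTree as ET

    EAF_FILE = "lsf_discourse.eaf"  # hypothetical file name
    # Any selected-finger character (hypothetical range) followed by "fully extended":
    # matches pointing-like configurations regardless of which finger is used.
    POINTING = re.compile("[\uE101-\uE105]\uE201")

    # .eaf files are XML; annotation texts sit in ANNOTATION_VALUE elements.
    tree = ET.parse(EAF_FILE)
    values = [el.text or "" for el in tree.iter("ANNOTATION_VALUE")]

    pointing_hits = [v for v in values if POINTING.search(v)]
    print(f"{len(pointing_hits)} pointing forms out of {len(values)} annotations")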
In this talk, we will present this font family system, its structure, the annotated features, and the queries that can be made at two levels: on the characters themselves and on their ligatured glyphs. Some transcribed data and preliminary results will be presented, focusing on transcription time through a comparison of manual and semi-automatic processing.
We propose, as a side event to the conference, to set up a tutorial session (3 hours, small group) to learn how to use the Typannot transcription environment.
BIBLIOGRAPHY
Baltrusaitis T., Zadeh A., Lim Y.C., Morency L.-P. 2018. OpenFace 2.0: facial behavior analysis toolkit. 13th IEEE Intl Conf. Automatic Face & Gesture Recognition (FG 2018): 59-66. doi.org/10.1109/FG.2018.00019
Bianchini C.S., Chevrefils L., Danet C., Doan P., Rebulard M., Contesse A., Boutet D. 2018. Coding movement in sign languages: the Typannot approach. Proc. ACM 5th Intl Conf. Movement and Computing (MoCo'18), sect. 1(#9): 1-8.
Boutet D., Doan P., Bianchini C.S., Danet C., Goguely T., Rebulard M. 2018. Systèmes graphématiques et écritures des langues signées. in “Signatures: (essais en) sémiotique de l’écriture” (J.M. Klinkenberg, S. Polis eds). Signata, 9: 391-426.
Goldin-Meadow S., Brentari D. 2017. Gesture, sign and language: the coming of age of sign language and gesture studies. Behavioral and Brain Sciences, 40(e46): 1-82.
Kita S. 2003. Pointing: a foundational building block of human communication. in "Pointing: where language, culture, and cognition meet" (S. Kita ed.). Erlbaum (Mahwah NJ): 1-8.
Liddell S.K. 2003. Grammar, gesture, and meaning in American Sign Language. Cambridge University Press.
McNeill D. 2015. Why we gesture. in: "Why we gesture: the surprising role of hand movements in communication". Cambridge University Press: 3-20. doi.org/10.1017/CBO9781316480526.002
Schembri A., Jones C., Burnham D. 2005. Comparing action gestures and classifier verbs of motion: evidence from Australian Sign Language, Taiwan Sign Language, and nonsigners’ gestures without speech. Journal of Deaf Studies and Deaf Education, 10(3): 272-290. doi.org/10.1093/deafed/eni029
Singleton J.L., Goldin-Meadow S., McNeill D. 1995. The cataclysmic break between gesticulation and sign: evidence against a unified continuum of gestural communication. in: "Language, gesture, and space" (K. Emmorey, J. Reilly eds). Erlbaum (Hillsdale NJ): 287-311.
Slobin D.I., Hoiting N., Anthony M., Biederman Y., Kuntze M., Lindert R., Pyers J., Thumann H., Weinberg A. 2001. Sign language transcription at the level of meaning components: the Berkeley Transcription System (BTS). Sign Language & Linguistics, 4(1-2): 63-104. doi.org/10.1075/sll.4.12.07slo