A comparison of sign language with speech plus gesture

In the introduction to his target article, Schlenker writes that "sign languages provide overt evidence on crucial aspects of the Logical Form of sentences that are only inferred indirectly in spoken language" (p. 3) and, furthermore, that "sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core" (p. 3). He further argues that one possible conclusion to be drawn from these facts is that "spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account" (p. 3), as Goldin-Meadow and Brentari (2017) have recently claimed. In the following, I elaborate on this possibility. Following Goldin-Meadow and Brentari (2017), I show, by reference to examples from the main text, that a close comparison of sign language and spoken language under controlled conditions will not only tell us more about the semantics of gestures in spoken languages, and about semantics in general, but will also shed light on the notion of gesture within sign languages. In section 6.1, Schlenker revisits some of the sign language phenomena discussed and draws parallels to cases of spoken language enriched by co-speech gestures. I follow this path and discuss further examples of this kind, along with the consequences that arise from this method. In particular, I compare Role Shift in sign language to viewpoint gestures in spoken language, discuss loci and locative shifts in comparison to pointing gestures in spoken language, and finally speculate about the role of gestures in general (in both sign and spoken language) and the semantic contribution they can make (i.e., which semantic dimension they target, and whether their contribution is at-issue or not).

[1] Ellen Fricke et al. Grammatik multimodal: wie Wörter und Gesten zusammenwirken [Multimodal grammar: how words and gestures interact]. 2012.

[2] Philippe Schlenker et al. Gesture projection and cosuppositions. 2018.

[3] D. McNeill et al. Speech-gesture mismatches: Evidence for one underlying representation of linguistic and nonlinguistic information. 1998.

[4] S. Goldin-Meadow et al. The influence of communication mode on written language processing and beyond. Behavioral and Brain Sciences, 2015.

[5] Kashmiri Stec. Meaningful shifts: A review of viewpoint markers in co-speech gesture and sign language. 2012.

[6] A. Kendon. Gesture: Visible Action as Utterance. 2004.

[7] P. Schlenker. A Plea for Monsters. 2003.

[8] Philippe Schlenker. Iconic enrichments: Signs vs. gestures. Behavioral and Brain Sciences, 2017.

[9] C. Creider. Hand and Mind: What Gestures Reveal about Thought. 1994.

[10] Fey Parrill. Dual viewpoint gestures. 2009.

[11] Fey Parrill et al. Viewpoint in speech–gesture integration: Linguistic structure, discourse structure, and event structure. 2010.

[12] Christopher Potts. The Logic of Conventional Implicatures. 2004.

[13] Josep Quer et al. Context Shift and Indexical Variables in Sign Languages. 2005.

[14] Philippe Schlenker et al. Context of Thought and Context of Utterance: A Note on Free Indirect Discourse and the Historical Present. 2004.

[15] Rick Nouwen. Complement Anaphora and Interpretation. Journal of Semantics, 2003.

[16] Christopher Potts. Conventional implicature and expressive content. 2008.

[17] Pranav Anand et al. Shifty Operators in Changing Contexts. 2004.

[18] Karin Rothschild. Unspeakable Sentences: Narration and Representation in the Language of Fiction. 2016.

[19] Jürgen Streeck et al. Grammars, Words, and Embodied Meanings: On the Uses and Evolution of So and Like. 2002.

[20] Wendy Sandler et al. Sign Language and Linguistic Universals: Entering the lexicon: lexicalization, backformation, and cross-modal borrowing. 2006.