Modality and contextual salience in co-sign vs. co-speech gesture

Schlenker has done an excellent job of combining several strands of recent work on sign languages to make a larger case that (i) sign languages make logical form visible, and (ii) this logical visibility is made possible by iconicity. These two hypotheses are intertwined in ways that bear on the fundamental question of where the boundary lies between language and gesture, in both signed and spoken languages. Schlenker focuses on a specific kind of gesture in sign languages: his examples are selectively chosen to engage with gradient, iconic forms that might be considered co-sign gesture and that might have an obvious parallel with co-speech gesture. We can therefore ask how co-sign and co-speech gesture differ from each other. Do sign languages have an advantage in their range of co-sign (vs. co-speech) gestures? In this commentary I suggest some ways in which there might indeed be a modality effect in gesture.

I assume from the start that messages in both signed and spoken languages pair linguistic and gestural form. Goldin-Meadow and Brentari (2017) argue that, instead of comparing sign with speech, a more fruitful comparison is sign+gesture with speech+gesture. But do the gestural elements of signed languages have the same status as those of spoken languages? Schlenker does not take a stand on this issue and confines the examples in his target article to sign languages; the issues I raise here pertain to this next step in the work. It may be that, once co-speech gesture is included in the analysis of spoken languages, the differences between the semantic resources available in signed and spoken languages disappear, or it may be that differences remain.

A further question is whether we should limit our analyses to quintessential iconic, manual gestures, or whether the scope of analysis should include a broader range of forms, such as prosody, or even some elements of the broader context. Context enrichment (Bott and Chemla 2016) is a term that can apply to all of these.

References

[1] W. Sandler. Symbiotic symbolization by hand and mouth in sign language. Semiotica, 2009.

[2] Marc Swerts et al. Audiovisual Correlates of Interrogativity: A Comparative Analysis of Catalan and Dutch. 2014.

[3] Matthew Stone et al. A Formal Semantic Analysis of Gesture. Journal of Semantics, 2009.

[4] Rachel Sutton-Spence et al. Analysing Sign Language Poetry. 2004.

[5] D. Loehr. Aspects of rhythm in gesture and speech. 2007.

[6] S. Goldin-Meadow et al. The influence of communication mode on written language processing and beyond. Behavioral and Brain Sciences, 2015.

[7] Scott K. Liddell. Grammar, Gesture, and Meaning in American Sign Language. 2003.

[8] Emmanuel Chemla et al. Shared and distinct mechanisms in deriving linguistic enrichment. 2016.

[9] Angela Ott et al. The interaction of pitch accent and gesture production. 2013.

[10] Karen Emmorey et al. Categorical Versus Gradient Properties of Classifier Constructions in ASL. 2003.

[11] Ronnie B. Wilbur. Sign Languages: The semantics–phonology interface. 2010.

[12] Arika Okrent et al. Modality and structure in signed and spoken languages: A modality-free notion of gesture and how it can help us with the morpheme vs. gesture question in sign language linguistics (or at least give us some criteria to work with). 2002.

[13] Susan Duncan et al. Gesture in Signing: A Case Study from Taiwan Sign Language. 2005.

[14] Philippe Schlenker et al. Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases. Proceedings of the National Academy of Sciences, 2015.

[15] D. Brentari et al. Production and Comprehension of Prosodic Markers in Sign Language Imperatives. Frontiers in Psychology, 2018.

[16] Roland Pfau et al. Nonmanuals: their grammatical and prosodic roles. 2010.

[17] Annika Herrmann et al. The marking of information structure in German Sign Language. 2015.

[18] M. Swerts et al. The Effects of Visual Beats on Prosodic Prominence: Acoustic Analyses, Auditory Perception and Visual Perception. 2007.

[19] J. Bargh et al. Social cognition and social perception. Annual Review of Psychology, 1987.

[20] Urtzi Etxeberria et al. The emergence of scalar meanings. Frontiers in Psychology, 2015.

[21] S. Nobe et al. Representational gestures, cognitive rhythms, and acoustic aspects of speech: A network threshold model of gesture production. 1996.