Vowel devoicing and the perception of spoken Japanese words.

Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191-243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, "morning," was detected more easily in asau or asazu than in any of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (for example, sake was detected less accurately, though not less rapidly, in nyaksake, a possible devoiced realization of nyakusake, than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible.

[1] A. Cutler, et al. Use of complex phonological patterns in speech processing: Evidence from Korean, 2005, Journal of Linguistics.

[2] J. Mehler, et al. Epenthetic vowels in Japanese: A perceptual illusion?, 1999, Journal of Experimental Psychology: Human Perception and Performance.

[3] V. Mann, et al. Influence of preceding fricative on stop consonant perception, 1981, The Journal of the Acoustical Society of America.

[4] A. Cutler, et al. Processing resyllabified words in French, 2003.

[5] A. Cutler, et al. Pitch accent in spoken-word recognition in Japanese, 1999, The Journal of the Acoustical Society of America.

[6] D. Norris, et al. Merging information in speech recognition: Feedback is never necessary, 2000, Behavioral and Brain Sciences.

[7] M. Kitahara. Pitch Accent and Vowel Devoicing in Tokyo Japanese, 1998.

[8] I. Racine, et al. Influence de l'effacement du schwa sur la reconnaissance des mots en parole continue [Influence of schwa deletion on word recognition in continuous speech], 2000.

[9] J. McQueen. Segmentation of Continuous Speech Using Phonotactics, 1998.

[10] A. Cutler, et al. Rhythmic Cues and Possible-Word Constraints in Japanese Speech Segmentation, 2001.

[11] K. Maekawa. Production and perception of the accent in the consecutively devoiced syllables in Tokyo Japanese, 1990, ICSLP.

[12] A. Cutler, et al. Spotting (different types of) words in (different types of) context, 1998, ICSLP.

[13] A. Cutler, et al. Universality Versus Language-Specificity in Listening to Running Speech, 2002, Psychological Science.

[14] A. van der Lugt. The use of sequential probabilities in the segmentation of speech, 2001.

[15] U. H. Frauenfelder, et al. The Role of the Syllable in Lexical Segmentation in French: Word-Spotting Data, 2002, Brain and Language.

[16] T. J. Vance, et al. An introduction to Japanese phonology, 1987.

[17] N. Warner, et al. Processing missing vowels: Allophonic processing in Japanese, 2009.

[18] H. Kikuchi, et al. Corpus-based analysis of vowel devoicing in spontaneous Japanese: An interim report, 2002.

[19] A. Liberman, et al. The role of consonant-vowel transitions in the perception of the stop and nasal consonants, 1954.

[20] K. Maekawa. Corpus of Spontaneous Japanese: Its design and evaluation, 2003.

[21] A. Weber, et al. First-language phonotactics in second-language listening, 2006, The Journal of the Acoustical Society of America.

[22] T. Sekiguchi, et al. The Use of Lexical Prosody for Lexical Access of the Japanese Language, 1999.

[23] A. Cutler, et al. Language-universal constraints on speech segmentation, 2001.

[24] A. Cutler, et al. Vowel harmony and speech segmentation in Finnish, 1997.

[25] A. Cutler, et al. The role of strong syllables in segmentation for lexical access, 1988.

[26] C. T. L. Kuijpers, et al. Facilitatory Effects of Vowel Epenthesis on Word Processing in Dutch, 1999.

[27] L. C. W. Pols, et al. The influence of local context on the identification of vowels and consonants, 1995, EUROSPEECH.

[28] D. Norris, et al. The Possible-Word Constraint in the Segmentation of Continuous Speech, 1997, Cognitive Psychology.

[29] J. M. McQueen, et al. Eight questions about spoken-word recognition, 2007.

[30] M. C. W. Yip, et al. Possible-Word Constraints in Cantonese Speech Segmentation, 2004, Journal of Psycholinguistic Research.

[31] M. Kondo. Syllable structure and its acoustic effects on vowels in devoicing environments, 2005.

[32] A. Cutler, et al. The lexical statistics of competitor activation in spoken-word recognition, 2002.