Computer- and human-directed speech before and after correction

Speech register research shows that humans are adept at fine-tuning components of their speech to accommodate the needs of their audience, suggesting that they have a model of human communicative needs. However, when that audience is a computer rather than another human, such a model may be invalid. Here we examine humans' speech to other humans or to an auditory-visual avatar before and after the computer or the human listener makes a listening "error". Speech is found to be hyperarticulated in Computer- compared with Human-Directed speech, and also in speech after correction. Results are discussed in terms of human-computer interaction and ASR systems.
