Speech diversity and speech interfaces: considering an inclusive future through stammering

The number of speech interfaces, and the range of services made available through them, continues to grow. This has opened up interaction to people who rely on speech as a critical modality for engaging with systems. However, people with diverse speech patterns, such as those who stammer, are at risk of being negatively affected by or excluded from speech interface interaction. In this paper, we consider what an inclusive speech interface future may look like for people who stammer. In doing so, we identify three key challenges: (1) developing effective speech recognition, (2) understanding the user experiences of people who stammer and (3) supporting speech interface designers through appropriate heuristics. We believe the interdisciplinary and cross-community strengths of venues like CUI are well positioned to address these challenges going forward.
