Striking a c[h]ord: vocal interaction in assistive technologies, games, and more

Vocal interaction research has primarily focused on systems for automatic speech recognition (ASR) and speech synthesis. While ASR has been successful in many domains, it can be impractical in some contexts of use, such as time-sensitive or continuous control tasks and applications involving users with speech impairments. This workshop aims to discuss the state of the art in vocal interaction methods that go beyond word recognition by exploiting the information contained in non-verbal vocalizations (e.g. pitch, volume, or timbre). Its overarching objective is to sketch a research agenda for the emerging discipline of non-verbal vocal interaction and its implications for the design of interactive systems. The workshop will be of interest to researchers, designers, developers, and users who are interested in, or would benefit from, non-verbal vocal interaction.
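To make the idea concrete, a minimal sketch of how a system might extract two of the non-verbal features mentioned above, pitch and volume, from a single audio frame is shown below. This is an illustration only, not a method from the workshop: the `vocal_features` helper is hypothetical, it uses a simple autocorrelation pitch estimate and RMS energy, and real systems typically use more robust estimators.

```python
import numpy as np

def vocal_features(frame, sample_rate):
    """Estimate volume (RMS energy) and pitch (autocorrelation) of one audio frame."""
    # Volume: root-mean-square energy of the frame.
    volume = np.sqrt(np.mean(frame ** 2))

    # Pitch: find the lag with the strongest autocorrelation peak
    # within a typical human-voice range (roughly 50-500 Hz).
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    min_lag = int(sample_rate / 500)   # shortest period considered (highest pitch)
    max_lag = int(sample_rate / 50)    # longest period considered (lowest pitch)
    lag = min_lag + int(np.argmax(corr[min_lag:max_lag]))
    pitch = sample_rate / lag
    return pitch, volume

# Usage: a 200 Hz sine wave stands in for a sustained vocalization
# (e.g. a hummed tone driving a continuous control).
sr = 16000
t = np.arange(sr // 10) / sr                 # one 100 ms frame
frame = 0.5 * np.sin(2 * np.pi * 200 * t)
pitch, volume = vocal_features(frame, sr)    # pitch near 200 Hz
```

Continuously re-estimating such features over short frames is what allows a hum or whistle to act as a time-sensitive, continuous control signal, in contrast to the discrete outputs of word recognition.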
