Multimodal interaction strategies in a multi-device environment around natural speech

In this paper, we present an intelligent user interface that combines a speech-based interface with several other input modalities. Integrating multiple devices into a working environment should provide greater flexibility in, for example, the daily routine of medical experts. To this end, we introduce a medical cyber-physical system that demonstrates the use of a bidirectional connection between a speech-based interface and a head-mounted see-through display. We show examples of how multiple input modalities can be exploited to increase the usability of a speech-based interaction system.
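The combination of modalities described above can be pictured as a dispatcher that routes events from different input channels (speech, pen, head-mounted display) to registered handlers. The following is a minimal illustrative sketch only; the class and modality names are hypothetical and do not correspond to the actual system presented in the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch of multimodal event routing. All names here
# (InputEvent, MultimodalDispatcher, the modality labels) are
# illustrative assumptions, not part of the described system.

@dataclass
class InputEvent:
    modality: str   # e.g. "speech", "pen", "hmd"
    payload: str    # recognized command or gesture label

class MultimodalDispatcher:
    def __init__(self) -> None:
        self.handlers: Dict[str, List[Callable[[InputEvent], str]]] = {}

    def register(self, modality: str,
                 handler: Callable[[InputEvent], str]) -> None:
        # Attach a handler to one input modality.
        self.handlers.setdefault(modality, []).append(handler)

    def dispatch(self, event: InputEvent) -> List[str]:
        # Route the event to every handler registered for its modality.
        return [h(event) for h in self.handlers.get(event.modality, [])]

dispatcher = MultimodalDispatcher()
dispatcher.register("speech", lambda e: f"spoken command: {e.payload}")
dispatcher.register("hmd", lambda e: f"display update: {e.payload}")

results = dispatcher.dispatch(InputEvent("speech", "show patient record"))
```

In such a design, a bidirectional link between the speech interface and the head-mounted display would simply mean that each component acts as both an event source and a registered handler for the other.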
