A tool for the integrated specification, design and implementation of truly multimodal speech-enabled HMIs
It is commonly accepted that the use of multiple modalities in human-machine interfaces (HMIs) can facilitate the handling of complex systems for users. Car drivers in particular need carefully designed interfaces that do not distract them from their main task, driving. Speech dialog systems have therefore frequently been added to graphical/haptic interfaces, because speech control allows drivers to keep their hands on the wheel and their eyes on the road. While speech recognition technology itself has been optimised in recent years for use in cars, so that high recognition rates are achievable even at high speeds, the overall HMI, i.e. the combination of the speech dialog with the graphical/haptic part of the interface, often lacks consistency and thus ease of use. In this paper we present a tool that allows the integrated development of the graphical/haptic and speech dialog parts and that ensures a consistent overall HMI by using a central, XML-based HMI model.
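The abstract does not describe the schema of the central HMI model, but the underlying idea of a single XML description feeding both modalities could look roughly like the following sketch. All element names, attributes, and phrases here are invented for illustration and do not reflect the authors' actual format; the point is only that the graphical menu structure and the speech vocabulary are derived from the same source, so the two modalities cannot drift apart.

```python
# Hypothetical sketch: one XML HMI model, two derived views
# (graphical menu labels and speech-command mapping).
import xml.etree.ElementTree as ET

HMI_MODEL = """
<hmi>
  <menu id="audio">
    <item id="radio" label="Radio" speech="turn on the radio"/>
    <item id="cd"    label="CD player" speech="play the CD"/>
  </menu>
  <menu id="climate">
    <item id="temp_up"   label="Warmer" speech="increase the temperature"/>
    <item id="temp_down" label="Cooler" speech="decrease the temperature"/>
  </menu>
</hmi>
"""

def build_graphical_menus(root: ET.Element) -> dict:
    """On-screen labels per menu, for the graphical/haptic layer."""
    return {
        menu.get("id"): [item.get("label") for item in menu.findall("item")]
        for menu in root.findall("menu")
    }

def build_speech_grammar(root: ET.Element) -> dict:
    """Phrase-to-action mapping, for the speech dialog layer."""
    return {
        item.get("speech"): item.get("id")
        for menu in root.findall("menu")
        for item in menu.findall("item")
    }

if __name__ == "__main__":
    root = ET.fromstring(HMI_MODEL)
    print(build_graphical_menus(root))  # {'audio': ['Radio', 'CD player'], ...}
    print(build_speech_grammar(root))   # {'turn on the radio': 'radio', ...}
```

In such a setup, consistency between modalities follows by construction: adding or renaming an item in the XML model updates both the graphical menu and the speech grammar in one place, which is the kind of guarantee the paper attributes to its central HMI model.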