Djinn: Interaction Framework for Home Environment Using Speech and Vision
In this paper we describe an interaction framework that uses speech recognition and computer vision to model a new generation of interfaces for the residential environment. We outline the blueprints of the architecture and describe its main building blocks. We present a concrete prototype platform on which this novel architecture has been deployed and will be tested in user field trials. The EC co-funds this work as part of the HomeTalk IST-2001-33507 project.
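The paper itself gives no implementation details in this abstract, but the combination of speech and vision inputs suggests an event-driven interaction manager. The following is a purely illustrative sketch under that assumption; the names (SpeechEvent, VisionEvent, InteractionManager) are hypothetical and do not reflect the actual Djinn/HomeTalk API.

```python
# Hypothetical sketch: merging speech and vision events in a home
# interaction manager. Not the Djinn implementation.
from dataclasses import dataclass
from queue import Queue


@dataclass
class SpeechEvent:
    utterance: str      # text produced by the speech recognizer


@dataclass
class VisionEvent:
    cue: str            # e.g. a detected gesture or presence cue


class InteractionManager:
    """Consumes modality events and turns them into device actions."""

    def __init__(self) -> None:
        self.events: Queue = Queue()

    def post(self, event) -> None:
        # Both recognizers push their results onto a shared queue.
        self.events.put(event)

    def handle_next(self) -> str:
        event = self.events.get()
        if isinstance(event, SpeechEvent):
            return f"speech command: {event.utterance}"
        if isinstance(event, VisionEvent):
            return f"vision cue: {event.cue}"
        return "unhandled event"


if __name__ == "__main__":
    manager = InteractionManager()
    manager.post(SpeechEvent("turn on the living-room lights"))
    manager.post(VisionEvent("user points at the lamp"))
    print(manager.handle_next())
    print(manager.handle_next())
```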