Gesture-enhanced information retrieval and presentation in a distributed learning environment
We describe a novel multimodal, gesture-enhanced interface that provides immersive as well as non-immersive, non-intrusive, and natural interaction between the user and the information system in a distributed learning environment. Different types of gestures are dynamically transformed into cues for spatial/temporal queries against multimedia information sources and databases. The transformation can depend on the location, the time, the task, and the user profile. Using various transcoding schemes, the retrieved information can be presented to the user in a multi-resolution, multi-dimensional, and multimodal manner.
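The abstract gives no implementation details, but the core idea, a context-dependent transformation from recognized gestures to spatial/temporal queries, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the `Context` fields mirror the factors named in the abstract (location, time, task, user profile), while the gesture names, the `GESTURE_TEMPLATES` table, and the SQL-like query strings are hypothetical stand-ins, not the authors' actual query language.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    """Interaction context that conditions the gesture-to-query mapping
    (hypothetical structure; fields taken from the abstract)."""
    location: str
    time: datetime
    task: str
    user_profile: str

# Hypothetical mapping from recognized gesture types to query templates.
GESTURE_TEMPLATES = {
    "point":  "SELECT * FROM media WHERE region = '{location}'",
    "circle": "SELECT * FROM media WHERE region NEAR '{location}'",
    "sweep":  ("SELECT * FROM media WHERE region = '{location}' "
               "AND captured_before = '{time}'"),
}

def gesture_to_query(gesture: str, ctx: Context) -> str:
    """Transform a recognized gesture into a spatial/temporal query,
    parameterized by the current interaction context."""
    template = GESTURE_TEMPLATES.get(gesture)
    if template is None:
        raise ValueError(f"unrecognized gesture: {gesture}")
    return template.format(location=ctx.location,
                           time=ctx.time.isoformat())

if __name__ == "__main__":
    ctx = Context(location="lecture-hall-3",
                  time=datetime(1999, 5, 1, 10, 30),
                  task="review",
                  user_profile="student")
    print(gesture_to_query("sweep", ctx))
```

In a fuller system the lookup would presumably also branch on `ctx.task` and `ctx.user_profile` (for example, returning lower-resolution media for a mobile profile), which is where the abstract's transcoding schemes would come into play.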