Modeling of spoken dialogue with and without visual information
Auditory information is the primary channel in human communication, but in real conversations visual information such as gestures, facial expressions, and head movements clearly makes dialogue smoother and more natural. Most research on the analysis of spoken dialogue relies on auditory information alone. We aim to clarify how humans manage spoken dialogue by studying more natural communication that includes visual information. In particular, visual information allows us to capture the listener's attitude toward the speaker, which cannot be determined from auditory information alone.