Comparison of Human-Human and Human-Robot Turn-Taking Behaviour in Multiparty Situated Interaction
Gabriel Skantze | Joakim Gustafson | Martin Johansson
[1] Gabriel Skantze et al. A Data-driven Model for Timing Feedback in a Map Task Dialogue System, 2013, SIGDIAL Conference.
[2] Michael Argyle et al. The central Europe experiment: Looking at persons and looking at objects, 1976.
[3] Takayuki Kanda et al. Conversational gaze mechanisms for humanlike robots, 2012, TIIS.
[4] Jean-Marc Odobez et al. Recognizing Visual Focus of Attention From Head Pose in Natural Meetings, 2009, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics).
[5] J. Burgoon et al. Interactivity in human–computer interaction: a study of credibility, understanding, and influence, 2000.
[6] A. Kendon. Some functions of gaze-direction in social interaction, 1967, Acta Psychologica.
[7] A. Ichikawa et al. An Analysis of Turn-Taking and Backchannels Based on Prosodic and Syntactic Features in Japanese Map Task Dialogs, 1998, Language and Speech.
[8] David R. Traum et al. Embodied agents for multi-party dialogue in immersive virtual worlds, 2002, AAMAS '02.
[9] Jie Zhu et al. Head orientation and gaze direction in meetings, 2002, CHI Extended Abstracts.
[10] Paul D. Allopenna et al. Tracking the Time Course of Spoken Word Recognition Using Eye Movements: Evidence for Continuous Mapping Models, 1998.
[11] Gabriel Skantze et al. The Furhat Back-Projected Humanoid Head: Lip Reading, Gaze and Multi-Party Interaction, 2013, Int. J. Humanoid Robotics.
[12] Eric Horvitz et al. Decisions about turns in multiparty conversation: from perception to action, 2011, ICMI '11.
[13] James F. Allen et al. Draft of DAMSL: Dialog Act Markup in Several Layers, 2007.
[14] Gabriel Skantze et al. Turn-taking, feedback and joint attention in situated human-robot interaction, 2014, Speech Commun.
[15] Tanja Schultz et al. Identifying the addressee in human-human-robot interactions based on head pose and speech, 2004, ICMI '04.
[16] Gabriel Skantze et al. IrisTK: a statechart-based toolkit for multi-party face-to-face interaction, 2012, ICMI '12.
[17] Louis-Philippe Morency et al. A probabilistic multimodal approach for predicting listener backchannels, 2009, Autonomous Agents and Multi-Agent Systems.
[18] H. H. Clark et al. Speaking while monitoring addressees for understanding, 2004.
[19] S. Duncan et al. Some Signals and Rules for Taking Speaking Turns in Conversations, 1972.
[20] Petra Wagner et al. Gaze Patterns in Turn-Taking, 2012, INTERSPEECH.
[21] Gabriel Skantze et al. Head Pose Patterns in Multiparty Human-Robot Team-Building Interactions, 2013, ICSR.
[22] Anton Nijholt et al. Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes, 2001, CHI.