Improving Sentence Completion in Dialogues with Multi-Modal Features

As part of a broader investigation into how humans understand each other through language and gestures, this paper focuses on how people understand incomplete sentences. We trained a system on interrupted-but-resumed sentences in order to find plausible completions for incomplete sentences, and obtained promising results using multi-modal features.
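To make the idea concrete, here is a minimal sketch of training a classifier to judge whether a candidate completion of an interrupted sentence is plausible, combining a lexical cue with a multi-modal (gesture) cue. The feature names, the toy data, and the perceptron learner are all illustrative assumptions for exposition; they are not the paper's actual features, data, or model.

```python
# Illustrative sketch only: features and data are hypothetical,
# not taken from the paper's corpus or experiments.

def featurize(example):
    """Map a (prefix_pos, gesture) pair to a binary feature vector."""
    pos, gesture = example
    return [
        1.0,                                 # bias term
        1.0 if pos == "DT" else 0.0,         # prefix ends in a determiner
        1.0 if gesture == "point" else 0.0,  # speaker points at an object
    ]

def train_perceptron(data, epochs=10):
    """Plain perceptron over binary features (stdlib only)."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for example, label in data:
            x = featurize(example)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if pred != label:
                delta = label - pred  # +1 or -1
                w = [wi + delta * xi for wi, xi in zip(w, x)]
    return w

def plausible(w, example):
    """True if the learned model scores this completion as plausible."""
    x = featurize(example)
    return sum(wi * xi for wi, xi in zip(w, x)) > 0

# Toy training data: in these invented examples, completions that are
# accompanied by a pointing gesture are the plausible ones.
train = [
    (("DT", "point"), 1),
    (("DT", "none"), 0),
    (("VB", "point"), 1),
    (("VB", "none"), 0),
]
w = train_perceptron(train)
```

The point of the sketch is only that a gestural feature can carry signal the lexical context alone does not: after training, the model separates otherwise identical candidates by whether a pointing gesture co-occurred.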
