Augmenting Online Conversation through Automated Discourse Tagging

In face-to-face communication, the communicative function of spoken text is clarified through supporting verbal and nonverbal discourse devices. In computer-mediated communication, the mediating channel may not be able to carry all of those devices. To ensure that the original intent is communicated effectively, discourse tags can be embedded in a message to encode the communicative function of the text, given the context in which it was produced. The receiving client can then generate its own supporting discourse devices from the received tags, taking the receiver's context into account. Spark is a synchronous CMC architecture based on this concept of message transformation: an outgoing text message is automatically annotated with discourse-function markup, which is then rendered as nonverbal discourse cues by a graphical avatar agent on the receiver's side. A user study of a derived application for collaborative route planning demonstrates the strength of the approach.
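To make the message-transformation idea concrete, the sketch below shows one way a sender could embed discourse-function tags in an outgoing message and a receiver could map those tags to avatar cues. The tag vocabulary (emphasis, topic_shift, turn_give), the cue names, and the naive sender-side tagging are illustrative assumptions for this sketch only; they are not Spark's actual markup language or rendering pipeline.

```python
# Minimal sketch of the send/receive transformation described above.
# Tag names and cue mapping are hypothetical, not Spark's real vocabulary.
import xml.etree.ElementTree as ET

# Assumed mapping from discourse-function tags to nonverbal avatar cues.
CUE_MAP = {
    "emphasis": "raise_eyebrows",
    "topic_shift": "gaze_away_then_back",
    "turn_give": "look_at_addressee",
}

def annotate(text: str) -> str:
    """Sender side: wrap the outgoing text in discourse-function markup.
    Here the whole utterance is naively tagged as emphasis; a real tagger
    would analyze the text against the sender's conversational context."""
    return f"<utterance><emphasis>{text}</emphasis></utterance>"

def render(tagged: str) -> list[tuple[str, str]]:
    """Receiver side: walk the markup and pair each text span with the
    avatar cue chosen for its discourse tag (receiver context ignored here)."""
    root = ET.fromstring(tagged)
    return [(node.text or "", CUE_MAP.get(node.tag, "neutral")) for node in root]

if __name__ == "__main__":
    message = annotate("Take the bridge, not the tunnel")
    print(render(message))  # [('Take the bridge, not the tunnel', 'raise_eyebrows')]
```

The key design point the sketch tries to capture is that the channel carries only text plus lightweight function tags, while the choice of concrete nonverbal behavior is deferred to the receiving client, which can adapt it to the receiver's own context.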
