Gestural Mind Markers in ECAs

We aim to create expressive Embodied Conversational Agents (ECAs) able to communicate multimodally with a user or with other ECAs. In this paper we focus on Gestural Mind Markers, that is, the gestures that convey information about the Speaker's Mind; we also present ANVIL-SCORE, a tool for analyzing and classifying multimodal data that is a semantically augmented version of Kipp's ANVIL [2].

[1] I. Poggi. Mind Markers, 2003.

[2] Joseph Bates, et al. Personality-rich believable agents that use language, 1997, AGENTS '97.

[3] Catherine Pelachaud, et al. Eye Communication in a Conversational 3D Synthetic Agent, 2000, AI Commun.

[4] Susan J. Boyce, et al. Spoken Natural Language Dialogue Systems: User Interface Issues for the Future, 1999.

[5] Isabella Poggi, et al. From a Typology of Gestures to a Procedure for Gesture Production, 2001, Gesture Workshop.

[6] Mark Steedman, et al. Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents, 1994, SIGGRAPH.

[7] C. D. Martin, et al. The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places [Book Review], 1997, IEEE Spectrum.

[8] Ipke Wachsmuth, et al. Towards a cognitively motivated processing of turn-taking signals for the embodied conversational agent Max, 2004.

[9] Kristinn R. Thórisson, et al. The Power of a Nod and a Glance: Envelope Vs. Emotional Feedback in Animated Conversational Agents, 1999, Appl. Artif. Intell.

[10] Yukiko I. Nakano, et al. Non-Verbal Cues for Discourse Structure, 2001.

[11] P. Ekman, et al. The Repertoire of Nonverbal Behavior: Categories, Origins, Usage, and Coding, 1969.

[12] Mari Ostendorf, et al. Error-correction detection and response generation in a spoken dialogue system, 2005, Speech Commun.

[13] Catherine Pelachaud, et al. Gestural Mind Markers in ECAs, 2003, Gesture Workshop.

[14] Clifford Nass, et al. The media equation - how people treat computers, television, and new media like real people and places, 1996.

[15] Richard Catrambone, et al. ECA as User Interface Paradigm, 2004, From Brows to Trust.

[16] James C. Lester, et al. The persona effect: affective impact of animated pedagogical agents, 1997, CHI.

[17] Stacy Marsella, et al. Interactive pedagogical drama, 2000, AGENTS '00.

[18] Scott P. Robertson, et al. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1991.

[19] R. Conte, et al. Cognitive and social action, 1995.

[20] J. Oberlander, et al. Using Facial Feedback to Enhance Turn-Taking in a Multimodal Dialogue System, 2005.

[21] A. Kendon. Gestures as illocutionary and discourse structure markers in Southern Italian conversation, 1995.

[22] K. Chang, et al. Embodiment in conversational interfaces: Rea, 1999, CHI '99.

[23] Isabella Poggi. Towards the Alphabet and the Lexicon of Gesture, Gaze and Touch, 2001.

[24] Paul Taylor, et al. The architecture of the Festival speech synthesis system, 1998, SSW.

[25] P. Ekman. Facial expression and emotion, 1993, The American Psychologist.

[26] J. Montepare, et al. The identification of emotions from gait information, 1987.

[27] Thomas Rist, et al. WebPersona: a lifelike presentation agent for the World-Wide Web, 1998, Knowl. Based Syst.

[28] Ken Perlin, et al. Improv: a system for scripting interactive actors in virtual worlds, 1996, SIGGRAPH.

[29] Maurizio Mancini, et al. Formational parameters and adaptive prototype instantiation for MPEG-4 compliant gesture synthesis, 2002, Proceedings of Computer Animation 2002 (CA 2002).

[30] James C. Lester, et al. Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments, 2000.

[31] Carolyn G. Fidelman, et al. The semiotics of French gestures, 1990.

[32] T. Shallice. What ghost in the machine?, 1992, Nature.

[33] Norman I. Badler, et al. The EMOTE model for effort and shape, 2000, SIGGRAPH.

[34] M. Studdert-Kennedy. Hand and Mind: What Gestures Reveal About Thought, 1994.

[35] Matthew Stone, et al. Living Hand to Mouth: Psychological Theories about Speech and Gesture in Interactive Dialogue Systems, 1999.

[36] Richard Kennaway, et al. Synthetic Animation of Deaf Signing Gestures, 2001, Gesture Workshop.

[37] Sharon L. Oviatt, et al. Predicting hyperarticulate speech during human-computer error resolution, 1998, Speech Commun.

[38] Heike Schaumburg, et al. Computers as Tools or as Social Actors? - The Users' Perspective on Anthropomorphic Agents, 2001, Int. J. Cooperative Inf. Syst.

[39] Sharon Oviatt, et al. Designing and evaluating conversational interfaces with animated characters, 2001.

[40] Sylvie Gibet, et al. High level specification and control of communication gestures: the GESSYCA system, 1999, Proceedings Computer Animation 1999.

[41] Nicole Chovil. Discourse-oriented facial displays in conversation, 1991.

[42] Catherine Pelachaud, et al. Embodied contextual agent in information delivering application, 2002, AAMAS '02.

[43] A. Kendon. Conducting Interaction: Patterns of Behavior in Focused Encounters, 1990.

[44] Richard Catrambone, et al. Anthropomorphic Agents as a User Interface Paradigm: Experimental Findings and a Framework for Research, 2002, Proceedings of the Twenty-Fourth Annual Conference of the Cognitive Science Society.

[45] Michael Kipp, et al. From Human Gesture to Synthetic Action, 2001.

[46] Isabella Poggi. The Lexicon and the Alphabet of Gesture, Gaze, and Touch, 2001, IVA.

[47] James M. Rehg, et al. Computer Vision for Human–Machine Interaction: Visual Sensing of Humans for Active Public Interfaces, 1998.

[48] Sharon L. Oviatt, et al. Error resolution during multimodal human-computer interaction, 1996, Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP '96).

[49] Rubén San-Segundo-Hernández, et al. Designing Confirmation Mechanisms and Error Recover Techniques in a Railway Information System for Spanish, 2001, SIGDIAL Workshop.

[50] Sharon L. Oviatt. Interface techniques for minimizing disfluent input to spoken language systems, 1994, CHI '94.

[51] Hao Yan, et al. More than just a pretty face: affordances of embodiment, 2000, IUI '00.