Modeling Gaze Behavior for Virtual Demonstrators

Autonomous virtual humans with coherent and natural motions are key to the effectiveness of many educational, training, and therapeutic applications. Among the many aspects to consider, gaze behavior is an important non-verbal communication channel that plays a vital role in the quality of the resulting animations. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations, and it addresses the temporal structure of gaze shifts and their coordination with targets and observers at varied positions.
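To make the kind of eye-head coordination discussed above concrete, the minimal Python sketch below splits a gaze shift toward an arbitrarily placed target into eye and head components, a common simplification in the gaze animation literature. It is purely illustrative: the function, its parameters (`EYE_LIMIT_DEG`, `HEAD_SPEED_DEG`), and the linear timing model are hypothetical assumptions, not the model proposed in the paper.

```python
import math

# Hypothetical parameters (not from the paper): eyes typically cover only a
# limited angular range, so the head contributes the rest of a large shift.
EYE_LIMIT_DEG = 35.0    # assumed maximum eye-in-head yaw
HEAD_SPEED_DEG = 240.0  # assumed peak head velocity, in degrees per second

def gaze_shift(agent_pos, agent_yaw_deg, target_pos):
    """Split a gaze shift toward a target into eye and head components.

    Positions are (x, z) on the ground plane; a fuller model would also
    handle pitch and the vestibulo-ocular (VOR) stabilization phase.
    Returns (eye_yaw_deg, head_yaw_deg, head_duration_s).
    """
    dx = target_pos[0] - agent_pos[0]
    dz = target_pos[1] - agent_pos[1]
    target_yaw = math.degrees(math.atan2(dx, dz))
    # Signed shift relative to current facing, wrapped into [-180, 180).
    shift = (target_yaw - agent_yaw_deg + 180.0) % 360.0 - 180.0
    # Eyes rotate up to their limit; the head covers the remainder.
    eye_yaw = max(-EYE_LIMIT_DEG, min(EYE_LIMIT_DEG, shift))
    head_yaw = shift - eye_yaw
    duration = abs(head_yaw) / HEAD_SPEED_DEG
    return eye_yaw, head_yaw, duration

# Example: a demonstrator at the origin, facing +z, glances at a listener
# standing off to the side.
print(gaze_shift((0.0, 0.0), 0.0, (2.0, 1.0)))
```

Under these assumptions, a 63-degree shift to the example listener is covered by a 35-degree eye saccade plus a 28-degree head turn lasting roughly 0.12 s; a data-driven model such as the one analyzed in the paper would instead derive these contributions and timings from the captured motions.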
