Head Motion Generation

[1] Panayiotis G. Georgiou, et al. Modeling head motion entrainment for prediction of couples' behavioral characteristics, 2015, International Conference on Affective Computing and Intelligent Interaction (ACII).

[2] Stacy Marsella, et al. Learning a model of speaker head nods using gesture corpora, 2009, AAMAS.

[3] Francisco J. Perales López, et al. Influence of head orientation in perception of personality traits in virtual agents, 2011, AAMAS.

[4] Hiroshi Ishiguro, et al. Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction, 2012, 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[5] K. Chang, et al. Embodiment in conversational interfaces: Rea, 1999, CHI '99.

[6] Stacy Marsella, et al. Predicting Co-verbal Gestures: A Deep and Temporal Modeling Approach, 2015, IVA.

[7] Jeffery A. Jones, et al. Visual Prosody and Speech Intelligibility, 2004, Psychological Science.

[8] A. Murat Tekalp, et al. Analysis of Head Gesture and Prosody Patterns for Prosody-Driven Head-Gesture Animation, 2008, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[9] Michael Kipp, et al. Gesture generation by imitation: from human behavior to computer character animation, 2005.

[10] Yang Liu, et al. MSP-AVATAR corpus: Motion capture recordings to study the role of discourse functions in the design of intelligent virtual agents, 2015, 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG).

[11] Geoffrey E. Hinton, et al. Factored conditional restricted Boltzmann machines for modeling motion style, 2009, ICML '09.

[12] Mark Steedman, et al. Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents, 1994, SIGGRAPH.

[13] Matthew Stone, et al. Specifying and animating facial signals for discourse in embodied conversational agents, 2004, Computer Animation and Virtual Worlds.

[14] Matthew Stone, et al. Speaking with hands: creating animated conversational characters from recordings of human performance, 2004, SIGGRAPH.

[15] Carlos Busso, et al. Generating Human-Like Behaviors Using Joint, Speech-Driven Models for Conversational Agents, 2012, IEEE Transactions on Audio, Speech, and Language Processing.

[16] Carlos Busso, et al. A multimodal analysis of synchrony during dyadic interaction using a metric based on sequential pattern mining, 2016, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[17] Volker Strom, et al. Visual prosody: facial movements accompanying speech, 2002, Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition.

[18] Geoffrey E. Hinton, et al. Modeling Human Motion Using Binary Latent Variables, 2006, NIPS.

[19] Takaaki Kuratate, et al. Audio-visual synthesis of talking faces from speech production correlates, 1999.

[20] C. Pelachaud, et al. GRETA: A Believable Embodied Conversational Agent, 2005.

[21] Mary Ellen Foster, et al. Comparing Rule-Based and Data-Driven Selection of Facial Displays, 2007.

[22] Steve DiPaola, et al. Facial actions as visual cues for personality, 2006, Computer Animation and Virtual Worlds.

[23] Carlos Busso, et al. Exploring Cross-Modality Affective Reactions for Audiovisual Emotion Recognition, 2013, IEEE Transactions on Affective Computing.

[24] James C. Lester, et al. Lifelike Pedagogical Agents for Mixed-initiative Problem Solving in Constructivist Learning Environments, 2004, User Modeling and User-Adapted Interaction.

[25] Carlos Busso, et al. IEMOCAP: interactive emotional dyadic motion capture database, 2008, Language Resources and Evaluation.

[26] Carlos Busso, et al. Retrieving Target Gestures Toward Speech Driven Animation with Meaningful Behaviors, 2015, ICMI.

[27] Hiroshi Ishiguro, et al. Analysis of relationship between head motion events and speech in dialogue conversations, 2014, Speech Communication.

[28] Björn Granström, et al. Audio-Visual Prosody: Perception, Detection, and Synthesis of Prominence, 2010, COST 2102 Training School.

[29] Zhigang Deng, et al. Rigid Head Motion in Expressive Speech Animation: Analysis and Synthesis, 2007, IEEE Transactions on Audio, Speech, and Language Processing.

[30] Stacy Marsella, et al. How to Train Your Avatar: A Data Driven Approach to Gesture Generation, 2011, IVA.

[31] Stefan Kopp, et al. Real-Time Visual Prosody for Interactive Virtual Agents, 2015, IVA.

[32] Scott McGlashan, et al. Olga - a conversational agent with gestures, 2007.

[33] Yang Liu, et al. Speech-Driven Animation Constrained by Appropriate Discourse Functions, 2014, ICMI.

[34] Brent Lance, et al. Emotionally Expressive Head and Body Movement During Gaze Shifts, 2007, IVA.

[35] Louis-Philippe Morency, et al. Parasocial consensus sampling: combining multiple perspectives to learn virtual human behavior, 2010, AAMAS.

[36] Louis-Philippe Morency, et al. Virtual Rapport 2.0, 2011, IVA.

[37] Uri Hadar, et al. Kinematics of head movements accompanying speech during conversation, 1983.

[38] Stefan Kopp, et al. Towards a Common Framework for Multimodal Generation: The Behavior Markup Language, 2006, IVA.

[39] Atef Ben Youssef, et al. Head Motion Analysis and Synthesis over Different Tasks, 2013, IVA.

[40] Shrikanth Narayanan, et al. Learning Expressive Human-Like Head Motion Sequences from Speech, 2008.

[41] Hans-Peter Seidel, et al. Real-time lens blur effects and focus control, 2010, SIGGRAPH.

[42] Mark Steedman, et al. Generating Facial Expressions for Speech, 1996, Cognitive Science.

[43] Stacy Marsella, et al. Virtual Rapport, 2006, IVA.

[44] Zhigang Deng, et al. Natural head motion synthesis driven by acoustic prosodic features, 2005, Computer Animation and Virtual Worlds.

[45] Carlos Busso, et al. Head Motion Generation with Synthetic Speech: A Data Driven Approach, 2016, INTERSPEECH.

[46] Carlos Busso, et al. Interrelation Between Speech and Facial Gestures in Emotional Utterances: A Single Subject Study, 2007, IEEE Transactions on Audio, Speech, and Language Processing.

[47] Zhigang Deng, et al. Live Speech Driven Head-and-Eye Motion Generators, 2012, IEEE Transactions on Visualization and Computer Graphics.

[48] Stacy Marsella, et al. Nonverbal Behavior Generator for Embodied Conversational Agents, 2006, IVA.

[49] Christoph Bregler, et al. Mood swings: expressive speech animation, 2005, ACM Transactions on Graphics.

[50] Thomas Rist, et al. The PPP persona: a multipurpose animated presentation agent, 1996, AVI '96.

[51] Cynthia Breazeal, et al. Regulation and Entrainment in Human-Robot Interaction, 2000, International Journal of Robotics Research.

[52] Yuyu Xu, et al. Virtual character performance from speech, 2013, SCA '13.

[53] Erwin Marsi, et al. Expressing uncertainty with a talking head in a multimodal question-answering system, 2007.

[54] Evelyn Z. McClave. Linguistic functions of head movements in the context of speech, 2000.

[55] Zhigang Deng, et al. Audio-based head motion synthesis for Avatar-based telepresence systems, 2004, ETP '04.

[56] Sergey Levine, et al. Gesture controllers, 2010, SIGGRAPH.