Look me in the Eyes: A Survey of Eye and Gaze Animation for Virtual Agents and Artificial Systems
Norman I. Badler | Sean Andrist | Bilge Mutlu | Michael Gleicher | Christopher E. Peters | Rachel McDonnell | Jeremy B. Badler | Kerstin Ruhland
[1] Daniel Thalmann,et al. Coordinating the Generation of Signs in Multiple Modalities in an Affective Agent , 2011 .
[2] Matthew W. Crocker,et al. The effect of robot gaze on processing robot utterances , 2009 .
[3] Norihiro Hagita,et al. Messages embedded in gaze of interface agents: impression management with agent's gaze , 2002, CHI.
[4] S. Brennan,et al. Speakers' eye gaze disambiguates referring expressions early during face-to-face conversation , 2007 .
[5] H. H. Clark,et al. Speaking while monitoring addressees for understanding , 2004 .
[6] Christophe Garcia,et al. Modeling gaze behavior for a 3D ECA in a dialogue situation , 2006, IUI '06.
[7] Norman I. Badler,et al. Eye Movements, Saccades, and Multiparty Conversations , 2008 .
[8] Justine Cassell,et al. Fully Embodied Conversational Avatars: Making Communicative Behaviors Autonomous , 1999, Autonomous Agents and Multi-Agent Systems.
[9] A. Abele,et al. Functions of gaze in social interaction: Communication and monitoring , 1986 .
[10] Dinesh K. Pai,et al. Eyecatch: simulating visuomotor coordination for object interception , 2012, ACM Trans. Graph..
[11] Christopher E. Peters,et al. Fundamentals of Agent Perception and Attention Modelling , 2011 .
[12] J. Cassell,et al. Embodied conversational agents , 2000 .
[13] Ari Shapiro,et al. Building a Character Animation System , 2011, MIG.
[14] C. Evinger,et al. Eyelid movements. Mechanisms and normal data. , 1991, Investigative ophthalmology & visual science.
[15] T. Allison,et al. Electrophysiological Studies of Face Perception in Humans , 1996, Journal of Cognitive Neuroscience.
[16] Marilyn A. Walker,et al. Bossy or Wimpy: Expressing Social Dominance by Combining Gaze and Linguistic Behaviors , 2010, IVA.
[17] W. T. Norman,et al. Toward an adequate taxonomy of personality attributes: replicated factor structure in peer nomination personality ratings. , 1963, Journal of abnormal and social psychology.
[18] J. M. Kittross. The measurement of meaning , 1959 .
[19] C. Osgood,et al. The Measurement of Meaning , 1958 .
[20] Christopher E. Peters,et al. Bottom-up visual attention for virtual human animation , 2003, Proceedings 11th IEEE International Workshop on Program Comprehension.
[21] V. Yngve. On getting a word in edgewise , 1970 .
[22] Takayuki Kanda,et al. Footing in human-robot conversations: How robots might shape participant roles using gaze cues , 2009, 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[23] Demetri Terzopoulos,et al. Artificial fishes: physics, locomotion, perception, behavior , 1994, SIGGRAPH.
[24] Martin Breidt,et al. Face reality: investigating the Uncanny Valley for virtual faces , 2010, SIGGRAPH ASIA.
[25] J. P. Otteson,et al. Effect of Teacher's Gaze on Children's Story Recall , 1980 .
[26] Carol O'Sullivan,et al. Clone attack! Perception of crowd variety , 2008, SIGGRAPH.
[27] N. Badler,et al. Eyes alive , 2002, ACM Trans. Graph..
[28] Heinrich H. Bülthoff,et al. Render me real? , 2012, ACM Trans. Graph..
[29] C. Kleinke. Gaze and eye contact: a research review. , 1986, Psychological bulletin.
[30] Michael D. Buhrmester,et al. Amazon's Mechanical Turk , 2011, Perspectives on psychological science : a journal of the Association for Psychological Science.
[31] Hannes Högni Vilhjálmsson,et al. Animating Idle Gaze in Public Places , 2009, IVA.
[32] Nick Fogt,et al. The Neurology of Eye Movements, 3rd ed. , 2000 .
[33] S. Drucker,et al. The Role of Eye Gaze in Avatar Mediated Conversational Interfaces , 2000 .
[34] R. Wurtz,et al. The Neurobiology of Saccadic Eye Movements , 1989 .
[35] Makoto Sato,et al. Reactive Virtual Human with Bottom-up and Top-down Visual Attention for Gaze Generation in Realtime Interactions , 2007, 2007 IEEE Virtual Reality Conference.
[36] Irene Albrecht,et al. Automatic Generation of Non-Verbal Facial Expressions from Speech , 2002 .
[37] J. Loomis,et al. Interpersonal Distance in Immersive Virtual Environments , 2003, Personality & social psychology bulletin.
[38] Sean Andrist,et al. Conversational Gaze Aversion for Humanlike Robots , 2014, 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[39] Robin L. Hill,et al. Referring and gaze alignment: accessibility is alive and well in situated dialogue , 2009 .
[40] Brent Lance,et al. The Rickel Gaze Model: A Window on the Mind of a Virtual Human , 2007, IVA.
[41] Bilge Mutlu,et al. Stylized and Performative Gaze for Character Animation , 2013, Comput. Graph. Forum.
[42] Jens Edlund,et al. Taming Mona Lisa: Communicating gaze faithfully in 2D and 3D facial projections , 2012, TIIS.
[43] F. Wilcoxon. Individual Comparisons by Ranking Methods , 1945 .
[44] A. Kendon. Conducting Interaction: Patterns of Behavior in Focused Encounters , 1990 .
[45] Yuichiro Yoshikawa,et al. Responsive Robot Gaze to Interaction Partner , 2006, Robotics: Science and Systems.
[46] Zhigang Deng,et al. Realistic Eye Motion Synthesis by Texture Synthesis , 2008 .
[47] M. Cary. The Role of Gaze in the Initiation of Conversation , 1978 .
[48] D. C. Howell. Statistical Methods for Psychology , 1987 .
[49] Christopher E. Peters. Evaluating Perception of Interaction Initiation in Virtual Environments Using Humanoid Agents , 2006, ECAI.
[50] Y. Nakano,et al. Gaze and Conversation Dominance in Multiparty Interaction , 2011 .
[51] Laurent Itti,et al. Realistic avatar eye and head animation using a neurobiological model of visual attention , 2004, SPIE Optics + Photonics.
[52] Anthony Steed,et al. Short Paper: Exploring the Object Relevance of a Gaze Animation Model , 2011, EGVE/EuroVR.
[53] Mohammad Obaid,et al. Cultural Behaviors of Virtual Agents in an Augmented Reality Environment , 2012, IVA.
[54] Sean Andrist,et al. Designing effective gaze mechanisms for virtual agents , 2012, CHI.
[55] L. Stark,et al. The main sequence, a tool for studying human eye movements , 1975 .
[56] Yuyu Xu,et al. Virtual character performance from speech , 2013, SCA '13.
[57] Dirk Heylen,et al. Head Gestures, Gaze and the Principles of Conversational Structure , 2006, Int. J. Humanoid Robotics.
[58] J. Fuller,et al. Head movement propensity , 2004, Experimental Brain Research.
[59] A. Mehrabian. Basic Dimensions For A General Psychological Theory , 1980 .
[60] Brent Lance,et al. Glances, glares, and glowering: how should a virtual human express emotion through gaze? , 2009, Autonomous Agents and Multi-Agent Systems.
[61] Jeremy N. Bailenson,et al. Equilibrium Theory Revisited: Mutual Gaze and Personal Space in Virtual Environments , 2001, Presence: Teleoperators & Virtual Environments.
[62] Soraia Raupp Musse,et al. Providing expressive gaze to virtual animated characters in interactive applications , 2008, CIE.
[63] Dirk Heylen,et al. First Impressions: Users' Judgments of Virtual Agents' Personality and Interpersonal Attitude in First Encounters , 2012, IVA.
[64] Gérard Bailly,et al. Scrutinizing Natural Scenes: Controlling the Gaze of an Embodied Conversational Agent , 2007, IVA.
[65] Christopher E. Peters,et al. A head movement propensity model for animating gaze shifts and blinks of virtual characters , 2010, Comput. Graph..
[66] Tony Belpaeme,et al. A study of a retro-projected robotic face and its effectiveness for gaze reading by humans , 2010, HRI.
[67] Brent Lance,et al. Emotionally Expressive Head and Body Movement During Gaze Shifts , 2007, IVA.
[68] Soraia Raupp Musse,et al. Automatic Generation of Expressive Gaze in Virtual Animated Characters: From Artists Craft to a Behavioral Animation Model , 2007, IVA.
[69] Marina L. Gavrilova,et al. Iris synthesis: a reverse subdivision application , 2005, GRAPHITE.
[70] Stefan Kopp,et al. Towards a Common Framework for Multimodal Generation: The Behavior Markup Language , 2006, IVA.
[71] Robin R. Murphy,et al. A survey of social gaze , 2011, 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[72] Zhigang Deng,et al. Rigid Head Motion in Expressive Speech Animation: Analysis and Synthesis , 2007, IEEE Transactions on Audio, Speech, and Language Processing.
[73] Catherine Pelachaud,et al. Eye Communication in a Conversational 3D Synthetic Agent , 2000, AI Commun..
[74] Irwin Silverman,et al. Pupillometry: A sexual selection approach , 2004 .
[75] Justine Cassell,et al. BEAT: the Behavior Expression Animation Toolkit , 2001, Life-like characters.
[76] Lee Ann Remington. Clinical Anatomy and Physiology of the Visual System, 3rd Edition , 2011, Butterworth-Heinemann.
[77] Kadi Bouatouch,et al. Image-Based Modeling of the Human Eye , 2009, IEEE Transactions on Visualization and Computer Graphics.
[78] D Guitton,et al. Upper eyelid movements measured with a search coil during blinks and vertical saccades. , 1991, Investigative ophthalmology & visual science.
[79] David G. Novick,et al. A Computational Model of Culture-Specific Conversational Behavior , 2007, IVA.
[80] Darren Gergle,et al. See what i'm saying?: using Dyadic Mobile Eye tracking to study collaborative reference , 2011, CSCW.
[81] Yukiko I. Nakano,et al. Effectiveness of Gaze-Based Engagement Estimation in Conversational Agents , 2013, Eye Gaze in Intelligent User Interfaces.
[82] Elisabeth André,et al. Breaking the Ice in Human-Agent Communication: Eye-Gaze Based Initiation of Contact with an Embodied Conversational Agent , 2009, IVA.
[83] G. Sjøgaard,et al. Eye blink frequency during different computer tasks quantified by electrooculography , 2006, European Journal of Applied Physiology.
[84] Radoslaw Niewiadomski,et al. Computational Models of Expressive Behaviors for a Virtual Agent. , 2012 .
[85] Elisabetta Bevacqua,et al. A Model of Attention and Interest Using Gaze Behavior , 2005, IVA.
[86] S. Duncan,et al. On the structure of speaker–auditor interaction during speaking turns , 1974, Language in Society.
[87] J. Stern,et al. The endogenous eyeblink. , 1984, Psychophysiology.
[88] Brigitte Krenn,et al. Embodied Conversational Characters: Representation Formats for Multimodal Communicative Behaviours , 2011 .
[89] S. Gosling,et al. A very brief measure of the Big-Five personality domains , 2003 .
[90] Laurent Itti,et al. Photorealistic Attention-Based Gaze Animation , 2006, 2006 IEEE International Conference on Multimedia and Expo.
[91] Peter E. Oppenheimer. Real time design and animation of fractal plants and trees , 1986 .
[92] Nadia Magnenat-Thalmann,et al. Realistic Emotional Gaze and Head Behavior Generation Based on Arousal and Dominance Factors , 2010, MIG.
[93] Matej Rojc,et al. A Survey of Listener Behavior and Listener Models for Embodied Conversational Agents , 2013 .
[94] Catherine Pelachaud,et al. Modelling Gaze Behaviour for Conversational Agents , 2003, IVA.
[95] P. Ekman,et al. Unmasking the face : a guide to recognizing emotions from facial clues , 1975 .
[96] Robin J. S. Sloan,et al. Using virtual agents to cue observer attention , 2010 .
[97] J. Burgoon,et al. Nonverbal Communication: The Unspoken Dialogue , 1988 .
[98] L. Stark,et al. Most naturally occurring human saccades have magnitudes of 15 degrees or less. , 1975, Investigative ophthalmology.
[99] Ning Wang,et al. Creating Rapport with Virtual Agents , 2007, IVA.
[100] R. Likert. A Technique for the Measurement of Attitudes , 2022, The SAGE Encyclopedia of Research Design.
[101] Susan R. Fussell,et al. Comparing a computer agent with a humanoid robot , 2007, 2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[102] Michele A. Basso,et al. Not looking while leaping: the linkage of blinking and saccadic gaze shifts , 2004, Experimental Brain Research.
[103] Bilge Mutlu,et al. Human-robot proxemics: Physical and psychological distancing in human-robot interaction , 2011, 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[104] E. Goffman. Behavior in Public Places , 1963 .
[105] Dirk Heylen,et al. Generating Nonverbal Signals for a Sensitive Artificial Listener , 2007, COST 2102 Workshop.
[106] D. Gática-Pérez. Modelling Interest in Face-to-Face Conversations from Multimodal Nonverbal Behaviour , 2010 .
[107] Maja Pantic,et al. IEEE Transactions on Affective Computing.
[108] C. Anderson,et al. PVT lapses differ according to eyes open, closed, or looking away. , 2010, Sleep.
[109] Yukiko I. Nakano,et al. Avatar's Gaze Control to Facilitate Conversational Turn-Taking in Virtual-Space Multi-user Voice Chat System , 2006, IVA.
[110] J. Horton. The Neurobiology of Saccadic Eye Movements , 1990 .
[111] S G Lisberger,et al. Visual motion processing for the initiation of smooth-pursuit eye movements in humans. , 1986, Journal of neurophysiology.
[112] John P. Lewis,et al. Automated eye motion using texture synthesis , 2005, IEEE Computer Graphics and Applications.
[113] Catharine Oertel,et al. Gaze direction as a Back-Channel inviting Cue in Dialogue , 2012 .
[114] Manuel Menezes de Oliveira Neto,et al. Photorealistic models for pupil light reflex and iridal pattern deformation , 2009, ACM Trans. Graph..
[115] Timothy W. Bickmore,et al. Changes in verbal and nonverbal conversational behavior in long-term interaction , 2012, ICMI '12.
[116] Makoto Kato,et al. Blink-related momentary activation of the default mode network while viewing videos , 2012, Proceedings of the National Academy of Sciences.
[117] W. Becker. The neurobiology of saccadic eye movements. Metrics. , 1989, Reviews of oculomotor research.
[118] Jessica K. Hodgins,et al. The saliency of anomalies in animated human characters , 2010, TAP.
[119] D. Guitton,et al. Gaze control in humans: eye-head coordination during orienting movements to targets within and beyond the oculomotor range. , 1987, Journal of neurophysiology.
[120] S. Brennan. Eye gaze cues for coordination in collaborative tasks , 2011 .
[121] Anthony Steed,et al. High-Fidelity Avatar Eye-Representation , 2008, 2008 IEEE Virtual Reality Conference.
[122] Panagiotis G. Ipeirotis,et al. Running Experiments on Amazon Mechanical Turk , 2010, Judgment and Decision Making.
[123] Mel Slater,et al. The impact of avatar realism and eye gaze control on perceived quality of communication in a shared immersive virtual environment , 2003, CHI '03.
[124] Stefan Kopp,et al. The Behavior Markup Language: Recent Developments and Challenges , 2007, IVA.
[125] Sean Andrist,et al. Conversational Gaze Aversion for Virtual Agents , 2013, IVA.
[126] John H. Anderson,et al. Ocular torsion in the cat after lesions of the interstitial nucleus of Cajal , 1981, Annals of the New York Academy of Sciences.
[127] Marc Cavazza,et al. Gaze behavior during interaction with a virtual character in interactive storytelling , 2010 .
[128] Norman I. Badler,et al. Visual Attention and Eye Gaze During Multiparty Conversations with Distractions , 2006, IVA.
[129] Norman I. Badler,et al. Evaluating perceived trust from procedurally animated gaze , 2013, MIG.
[130] Herwin van Welbergen. Designing Appropriate Feedback for Virtual Agents and Robots , 2012, HRI.
[131] Eric Horvitz,et al. Facilitating multiparty dialog with gaze, gesture, and speech , 2010, ICMI-MLMI '10.
[132] Norman I. Badler,et al. Eyes alive , 2002, ACM Trans. Graph..
[133] Brian Scassellati,et al. The Benefits of Interactions with Physically Present Robots over Video-Displayed Agents , 2011, Int. J. Soc. Robotics.
[134] Natalia A. Schmid,et al. A Model Based, Anatomy Based Method for Synthesizing Iris Images , 2006, ICB.
[135] Hannes Högni Vilhjálmsson. Animating Conversation in Online Games , 2004, ICEC.
[136] Stacy Marsella,et al. Expressive Behaviors for Virtual Worlds , 2004, Life-like characters.
[137] Kostas Karpouzis,et al. Investigating shared attention with a virtual agent using a gaze-based interface , 2010, Journal on Multimodal User Interfaces.
[138] M. Mori. The Uncanny Valley , 2019, The Animation Studies Reader.
[139] Dirk Heylen,et al. Gaze behaviour, believability, likability and the iCat , 2009, AI & SOCIETY.
[140] Peter Oppenheimer,et al. Real time design and animation of fractal plants and trees , 1986, SIGGRAPH.
[141] Jörn Ostermann,et al. Video-realistic image-based eye animation via statistically driven state machines , 2010, The Visual Computer.
[142] B. Fischer,et al. Human express saccades: extremely short reaction times of goal directed eye movements , 2004, Experimental Brain Research.
[143] Sean Andrist,et al. A head-eye coordination model for animating gaze shifts of virtual characters , 2012, Gaze-In '12.
[144] A. Kendon. Some functions of gaze-direction in social interaction. , 1967, Acta psychologica.
[145] Qianli Xu,et al. Designing engagement-aware agents for multiparty conversations , 2013, CHI.
[146] M. Aramideh,et al. Eyelid movements: behavioral studies of blinking in humans under different stimulus conditions. , 2003, Journal of neurophysiology.
[147] J. Cassell,et al. Turn taking vs. Discourse Structure: How Best to Model Multimodal Conversation , 1998 .
[148] Christopher E. Peters. Direction of Attention Perception for Conversation Initiation in Virtual Environments , 2005, IVA.
[149] Bilge Mutlu,et al. Modeling and Evaluating Narrative Gestures for Humanlike Robots , 2013, Robotics: Science and Systems.
[150] Iain Matthews,et al. Modeling and animating eye blinks , 2011, TAP.
[151] M. Bradley,et al. The pupil as a measure of emotional arousal and autonomic activation. , 2008, Psychophysiology.
[152] Andrew Tucker,et al. Monitoring eye and eyelid movements by infrared reflectance oculography to measure drowsiness in drivers , 2007 .
[153] Carol O'Sullivan,et al. Eye-catching crowds: saliency based selective variation , 2009, SIGGRAPH.
[154] P. Moran. On the method of paired comparisons. , 1947, Biometrika.
[155] Anthony Steed,et al. Modelling selective visual attention for autonomous virtual characters , 2011, Comput. Animat. Virtual Worlds.
[156] Anthony Steed,et al. A saliency-based method of simulating visual attention in virtual scenes , 2009, VRST '09.
[157] C. Koch,et al. Models of bottom-up and top-down visual attention , 2000 .
[158] Ning Wang,et al. Don't just stare at me! , 2010, CHI.
[159] Justine Cassell,et al. BodyChat: autonomous communicative behaviors in avatars , 1998, AGENTS '98.
[160] Anthony Steed,et al. An assessment of eye-gaze potential within immersive virtual environments , 2007, TOMCCAP.
[161] Brent Lance,et al. The Expressive Gaze Model: Using Gaze to Express Emotion , 2010, IEEE Computer Graphics and Applications.
[162] M. Doughty. Consideration of Three Types of Spontaneous Eyeblink Activity in Normal Humans: during Reading and Video Display Terminal Use, in Primary Gaze, and while in Conversation , 2001, Optometry and vision science : official publication of the American Academy of Optometry.
[163] Stacy Marsella,et al. Towards Expressive Gaze Manner in Embodied Virtual Agents , 2004 .
[164] Neil A. Dodgson,et al. Eye movements and attention for behavioural animation , 2002, Comput. Animat. Virtual Worlds.
[165] Junichi Hoshino,et al. Head-eye Animation Corresponding to a Conversation for CG Characters , 2007, Comput. Graph. Forum.
[166] Mel Slater,et al. An Eye Gaze Model for Dyadic Interaction in an Immersive Virtual Environment: Practice and Experience , 2004, Comput. Graph. Forum.
[167] K. Kroschel,et al. Evaluation of natural emotions using self assessment manikins , 2005, IEEE Workshop on Automatic Speech Recognition and Understanding, 2005..
[168] S. Tipper,et al. Gaze cueing of attention: visual attention, social cognition, and individual differences. , 2007, Psychological bulletin.
[169] Ludovic Hoyet,et al. Evaluating the effect of emotion on gender recognition in virtual humans , 2013, SAP.
[170] Peter R. Greene,et al. Gaussian and Poisson Blink Statistics: A Preliminary Study , 1986, IEEE Transactions on Biomedical Engineering.
[171] Norman I. Badler,et al. Where to Look? Automating Attending Behaviors of Virtual Human Characters , 1999, Agents.
[172] D. Robinson,et al. The vestibulo‐ocular reflex during human saccadic eye movements. , 1986, The Journal of physiology.
[173] Erik Reinhard,et al. An Ocularist's Approach to Human Iris Synthesis , 2003, IEEE Computer Graphics and Applications.
[174] Oleg V. Komogortsev,et al. 2D Linear oculomotor plant mathematical model , 2013, ACM Trans. Appl. Percept..
[175] Anthony Steed,et al. Eyelid kinematics for virtual characters , 2010 .
[176] C. Izard. The psychology of emotions , 1991 .
[177] J. Stahl,et al. Amplitude of human head movements associated with horizontal saccades , 1999, Experimental Brain Research.
[178] Justine Cassell,et al. Intersubjectivity in human-agent interaction , 2007 .
[179] Cynthia Breazeal,et al. Effect of a robot on user perceptions , 2004, 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566).
[180] Sven Behnke,et al. Integrating vision and speech for conversations with multiple persons , 2005, 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[181] Zhigang Deng,et al. Natural Eye Motion Synthesis by Modeling Gaze-Head Coupling , 2009, 2009 IEEE Virtual Reality Conference.
[182] Stefan Kopp,et al. Using Virtual Agents to Guide Attention in Multi-task Scenarios , 2013, IVA.
[183] Yukiko I. Nakano,et al. Estimating user's engagement from eye-gaze behaviors in human-agent conversations , 2010, IUI '10.
[184] Gladimir V. G. Baranoski,et al. A Predictive Light Transport Model for the Human Iris , 2006, Comput. Graph. Forum.
[185] Peter J. Hunter,et al. A virtual environment and model of the eye for surgical simulation , 1994, SIGGRAPH.
[186] Helena Grillon,et al. Simulating gaze attention behaviors for crowds , 2009 .
[187] Carlos Busso,et al. Generating Human-Like Behaviors Using Joint, Speech-Driven Models for Conversational Agents , 2012, IEEE Transactions on Audio, Speech, and Language Processing.
[188] R. Leigh,et al. The neurology of eye movements , 1984 .
[189] Elisabetta Bevacqua,et al. A Survey of Listener Behavior and Listener Models for Embodied Conversational Agents , 2013 .
[190] J. P. Morgan,et al. Design and Analysis: A Researcher's Handbook , 2005, Technometrics.
[191] Kai Vogeley,et al. The effects of self-involvement on attention, arousal, and facial expression during social interaction with virtual others: A psychophysiological study , 2006, Social neuroscience.
[192] Shrikanth Narayanan,et al. Learning Expressive Human-Like Head Motion Sequences from Speech , 2008 .
[193] Nicole Chovil. Discourse‐oriented facial displays in conversation , 1991 .
[194] K. Shirai,et al. Controlling gaze of humanoid in communication with human , 1998, Proceedings. 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovations in Theory, Practice and Applications (Cat. No.98CH36190).
[195] S. Baron-Cohen. How to build a baby that can read minds: Cognitive mechanisms in mindreading. , 1994 .
[196] M. Kendall. On the Method of Paired Comparisons , 1940 .
[197] Christopher E. Peters. Animating Gaze Shifts for Virtual Characters Based on Head Movement Propensity , 2010, 2010 Second International Conference on Games and Virtual Worlds for Serious Applications.
[198] L. Remington. Clinical Anatomy and Physiology of the Visual System, 3rd Edition , 1997 .
[199] Ana Paiva,et al. Providing Gender to Embodied Conversational Agents , 2011, IVA.
[200] Mark Steedman,et al. Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents , 1994, SIGGRAPH.
[201] M. Argyle,et al. Gaze and Mutual Gaze , 1994, British Journal of Psychiatry.
[202] Brent Lance,et al. Real-time expressive gaze animation for virtual humans , 2009, AAMAS.
[203] Shigeru Kitazawa,et al. Eyeblink entrainment at breakpoints of speech , 2010, Neuroscience Research.
[204] Zhigang Deng,et al. Live Speech Driven Head-and-Eye Motion Generators , 2012, IEEE Transactions on Visualization and Computer Graphics.
[205] Xia Mao,et al. Emotional eye movement generation based on Geneva Emotion Wheel for virtual agents , 2012, J. Vis. Lang. Comput..
[206] Kristinn R. Thórisson,et al. The Power of a Nod and a Glance: Envelope Vs. Emotional Feedback in Animated Conversational Agents , 1999, Appl. Artif. Intell..
[207] John Funge,et al. Cognitive modeling: knowledge, reasoning and planning for intelligent characters , 1999, SIGGRAPH.
[208] Igor S. Pandzic,et al. On creating multimodal virtual humans—real time speech driven facial gesturing , 2010, Multimedia Tools and Applications.