Emotional facial expressions, eye behaviors, and lip synchronization: current and future directions
[1] Richard S. Wallace,et al. The Anatomy of A.L.I.C.E. , 2009 .
[2] Mark Steedman,et al. Generating Facial Expressions for Speech , 1996, Cogn. Sci..
[3] Coriandre Vilain,et al. An experimental study of speech/gesture interactions and distance encoding , 2013, Speech Commun..
[4] Frédéric H. Pighin,et al. Expressive speech-driven facial animation , 2005, ACM Trans. Graph..
[5] P Dulguerov,et al. Review of objective topographic facial nerve evaluation methods. , 1999, The American journal of otology.
[6] Soraia Raupp Musse,et al. Reflecting User Faces in Avatars , 2010, IVA.
[7] Brent Lance,et al. The Relation between Gaze Behavior and the Attribution of Emotion: An Empirical Study , 2008, IVA.
[8] Luc Renambot,et al. Designing an Expressive Avatar of a Real Person , 2010, IVA.
[9] Li Zhang,et al. Intelligent facial emotion recognition and semantic-based topic detection for a humanoid robot , 2013, Expert Syst. Appl..
[10] Igor S. Pandzic,et al. Multimodal behavior realization for embodied conversational agents , 2011, Multimedia Tools and Applications.
[11] Françoise J. Prêteux,et al. On-Line Animation System for Learning and Practice Cued Speech , 2009, ICT Innovations.
[12] Norman I. Badler,et al. Where to Look? Automating Attending Behaviors of Virtual Human Characters , 1999, AGENTS '99.
[13] Beat Fasel,et al. Automatic Facial Expression Analysis: A Survey , 1999 .
[14] N. Badler,et al. Eyes Alive , 2002, SIGGRAPH.
[15] Aitor Arrieta,et al. High-Realistic and Flexible Virtual Presenters , 2010, AMDO.
[16] M. Ibbotson,et al. Visual perception and saccadic eye movements , 2011, Current Opinion in Neurobiology.
[17] Jeffrey F. Cohn. Advances in Behavioral Science Using Automated Facial Image Analysis and Synthesis [Social Sciences] , 2010, IEEE Signal Processing Magazine.
[18] Andrew Olney,et al. Gaze tutor: A gaze-reactive intelligent tutoring system , 2012, Int. J. Hum. Comput. Stud..
[19] Ahmad Hoirul Basori,et al. Eye, lip and crying expression for virtual human , 2012 .
[20] Jhing-Fa Wang,et al. Kernel-Based Lip Shape Clustering with Phoneme Recognition for Real-Time Voice Driven Talking Face , 2010, ISNN.
[21] Zheng Li,et al. EEMML: the emotional eye movement animation toolkit , 2011, Multimedia Tools and Applications.
[22] Marco Gillies. Piavca: A Framework for Heterogeneous Interactions with Virtual Characters , 2008, VR.
[23] Zen-Chung Shih,et al. A nonparametric regression model for virtual humans generation , 2010, Multimedia Tools and Applications.
[24] R. Mccall,et al. The Genetic and Environmental Origins of Learning Abilities and Disabilities in the Early School , 2007, Monographs of the Society for Research in Child Development.
[25] Justine Cassell,et al. BEAT: the Behavior Expression Animation Toolkit , 2001, Life-like characters.
[26] Mel Slater,et al. The impact of eye gaze on communication using humanoid avatars , 2001, CHI.
[27] Matthew Stone,et al. An anthropometric face model using variational techniques , 1998, SIGGRAPH.
[28] Catherine Pelachaud,et al. Expressive Gesture Model for Humanoid Robot , 2011, ACII.
[29] C. Darwin. The Expression of the Emotions in Man and Animals , .
[30] Narendra Patel,et al. 3D Facial model construction and expressions synthesis from a single frontal face image , 2010, 2010 International Conference on Computer and Communication Technology (ICCCT).
[31] Roel Vertegaal,et al. Effects of Gaze on Multiparty Mediated Communication , 2000, Graphics Interface.
[32] Keith Waters,et al. A muscle model for animating three-dimensional facial expression , 1987, SIGGRAPH.
[33] Hui Chen,et al. Phoneme-level articulatory animation in pronunciation training , 2012, Speech Commun..
[34] Dirk Heylen,et al. Bridging the Gap between Social Animal and Unsocial Machine: A Survey of Social Signal Processing , 2012, IEEE Transactions on Affective Computing.
[35] Gérard Bailly,et al. Gaze, conversational agents and face-to-face communication , 2010, Speech Commun..
[36] Stacy Marsella,et al. Nonverbal Behavior Generator for Embodied Conversational Agents , 2006, IVA.
[37] Binbin Tu,et al. Bimodal Emotion Recognition Based on Speech Signals and Facial Expression , 2011 .
[38] Fabio Pianesi,et al. Xface open source project and smil-agent scripting language for creating and animating embodied conversational agents , 2007, ACM Multimedia.
[39] Alexis Héloir,et al. Realizing Multimodal Behavior - Closing the Gap between Behavior Planning and Embodied Agent Presentation , 2010, IVA.
[40] Thomas Vetter,et al. A morphable model for the synthesis of 3D faces , 1999, SIGGRAPH.
[41] D. Nishimura. Graphically Speaking , 1999, Science.
[42] Hatice Gunes,et al. Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence-Arousal Space , 2011, IEEE Transactions on Affective Computing.
[43] Loïc Kessous,et al. Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis , 2010, Journal on Multimodal User Interfaces.
[44] Seongah Chin,et al. Multi‐layer structural wound synthesis on 3D face , 2011, Comput. Animat. Virtual Worlds.
[45] Hatice Gunes,et al. Automatic, Dimensional and Continuous Emotion Recognition , 2010, Int. J. Synth. Emot..
[46] Zhigang Deng,et al. Data-Driven 3D Facial Animation , 2007 .
[47] Fernando De la Torre,et al. Facial Expression Analysis , 2011, Visual Analysis of Humans.
[48] Radoslaw Niewiadomski,et al. Cross-media agent platform , 2011, Web3D '11.
[49] P. Ekman,et al. Facial Action Coding System: Manual , 1978 .
[50] Ken-ichi Anjyo,et al. Developing tools for 2D/3D conversion of Japanese animations , 2011, SIGGRAPH '11.
[51] Nanning Zheng,et al. Expression transfer for facial sketch animation , 2011, Signal Process..
[52] Björn W. Schuller,et al. Paralinguistics in speech and language - State-of-the-art and the challenge , 2013, Comput. Speech Lang..
[53] S. Goldin-Meadow,et al. Why people gesture when they speak , 1998, Nature.
[54] Anton Nijholt,et al. Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes , 2001, CHI.
[55] M. Argyle,et al. Gaze and Mutual Gaze , 1994, British Journal of Psychiatry.
[56] Norman I. Badler,et al. Animating facial expressions , 1981, SIGGRAPH '81.
[57] Henrique S. Malvar,et al. Making Faces , 1998, SIGGRAPH.
[58] Alexandros Eleftheriadis,et al. MPEG-4 Systems: Overview , 2000, Signal Process. Image Commun..
[59] Igor S. Pandzic,et al. On creating multimodal virtual humans—real time speech driven facial gesturing , 2010, Multimedia Tools and Applications.
[60] Lianhong Cai,et al. Facial Expression Synthesis Based on Emotion Dimensions for Affective Talking Avatar , 2010, Modeling Machine Emotions for Realizing Intelligence.
[61] Mahmoud Neji,et al. Agent-based collaborative affective e-learning system , 2007, IMMERSCOM.
[62] Nadia Magnenat-Thalmann,et al. Fast head modeling for animation , 2000, Image Vis. Comput..
[63] Ioannis Arapakis,et al. Theories, methods and current research on emotions in library and information science, information retrieval and human-computer interaction , 2011, Inf. Process. Manag..
[64] Matthew Stone,et al. Speaking with hands: creating animated conversational characters from recordings of human performance , 2004, ACM Trans. Graph..
[65] Fakhri Karray,et al. Survey on speech emotion recognition: Features, classification schemes, and databases , 2011, Pattern Recognit..
[66] Zhihong Zeng,et al. A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions , 2009, IEEE Trans. Pattern Anal. Mach. Intell..
[67] Yun Fu,et al. Human-Centered Face Computing in Multimedia Interaction and Communication , 2010, Intelligent Multimedia Communication.
[68] Frank K. Soong,et al. Synthesizing photo-real talking head via trajectory-guided sample selection , 2010, INTERSPEECH.
[69] Ari Shapiro,et al. Building a Character Animation System , 2011, MIG.
[70] Marius Preda,et al. Avatar interoperability and control in virtual Worlds , 2013, Signal Process. Image Commun..
[71] S. Drucker,et al. The Role of Eye Gaze in Avatar Mediated Conversational Interfaces , 2000 .
[72] Stefan Kopp,et al. Towards a Common Framework for Multimodal Generation: The Behavior Markup Language , 2006, IVA.
[73] Daniel Thalmann,et al. SMILE: A Multilayered Facial Animation System , 1991, Modeling in Computer Graphics.
[74] Takeo Kanade,et al. Automated facial expression recognition based on FACS action units , 1998, Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition.
[75] Catherine Pelachaud,et al. Studies on gesture expressivity for a virtual agent , 2009, Speech Commun..
[76] Soraia Raupp Musse,et al. An extensible framework for interactive facial animation with facial expressions, lip synchronization and eye behavior , 2009, CIE.
[77] Verónica Orvalho,et al. A Proposal for a Visual Speech Animation System for European Portuguese , 2012, IberSPEECH.
[78] Mel Slater,et al. Building Expression into Virtual Characters , 2006, Eurographics.
[79] Gabriel Skantze,et al. IrisTK: a statechart-based toolkit for multi-party face-to-face interaction , 2012, ICMI '12.
[80] Stacy Marsella,et al. SmartBody: behavior realization for embodied conversational agents , 2008, AAMAS.
[81] John B.P. Stephenson. The Neurology of Eye Movements, 4th ed., R. John Leigh and David S. Zee, Oxford University Press (2006) [book review] , 2007 .
[82] Brian S. Schnitzer,et al. Eye movements and attention: The role of pre-saccadic shifts of attention in perception, memory and the control of saccades , 2012, Vision Research.
[83] Mark Grimshaw,et al. Facial expression of emotion and perception of the Uncanny Valley in virtual characters , 2011, Comput. Hum. Behav..
[84] Hugo Quené,et al. Audible smiles and frowns affect speech comprehension , 2012, Speech Commun..
[85] Bogdan Raducanu,et al. Facial expression recognition using tracked facial actions: Classifier performance analysis , 2013, Eng. Appl. Artif. Intell..
[86] C. Darwin,et al. The Expression of the Emotions in Man and Animals , 1956 .
[87] Hassan Ugail,et al. On the Development of a Talking Head System Based on the Use of PDE-Based Parametic Surfaces , 2011, Trans. Comput. Sci..
[88] A. L. Yarbus. Eye Movements During Perception of Complex Objects , 1967 .
[89] Eileen Kowler. Eye movements: The past 25 years , 2011, Vision Research.
[90] José Mario De Martino,et al. Benchmarking Speech Synchronized Facial Animation Based on Context-Dependent Visemes , 2007 .
[91] Keith Waters,et al. Computer facial animation , 1996 .
[92] Brent Lance,et al. Emotionally Expressive Head and Body Movement During Gaze Shifts , 2007, IVA.
[93] Igor S. Pandzic,et al. A Controller-Based Animation System for Synchronizing and Realizing Human-Like Conversational Behaviors , 2009, COST 2102 Training School.
[94] Frederic I. Parke. Parameterized Models for Facial Animation , 1982, IEEE Computer Graphics and Applications.
[95] S. Baron-Cohen,et al. Does the autistic child have a “theory of mind” ? , 1985, Cognition.
[96] Junichi Yamagishi,et al. Speech-driven lip motion generation with a trajectory HMM , 2008, INTERSPEECH.
[97] Satoshi Nakamura,et al. Model-based talking face synthesis for anthropomorphic spoken dialog agent system , 2003, MULTIMEDIA '03.