Towards ECA's Animation of Expressive Complex Behaviour
[1] L. Carney, et al. The nature of normal blinking patterns, 1982, Acta Ophthalmologica.
[2] Mark Steedman, et al. Generating Facial Expressions for Speech, 1996, Cogn. Sci.
[3] Hans Peter Graf, et al. Sample-based synthesis of photo-realistic talking heads, 1998, Proceedings Computer Animation '98 (Cat. No.98EX169).
[4] Igor S. Pandzic, et al. MPEG-4 Facial Animation, 2002.
[5] Felix A. Fischer, et al. An integrated framework for adaptive reasoning about conversation patterns, 2005, AAMAS '05.
[6] F. J. Clark, et al. On the regulation of depth and rate of breathing, 1972, The Journal of Physiology.
[7] Constantine Stephanidis, et al. Universal Access in Human-Computer Interaction, 2011.
[8] C. Nass, et al. Truth is beauty: researching embodied conversational agents, 2001.
[9] C. Pelachaud, et al. GRETA: a believable embodied conversational agent, 2005.
[10] Matej Rojc, et al. EVA: expressive multipart virtual agent performing gestures and emotions, 2011.
[11] Leo Obrst, et al. The Semantic Web: A Guide to the Future of XML, Web Services and Knowledge Management, 2003.
[12] Igor S. Pandzic, et al. [HUGE]: Universal Architecture for Statistically Based HUman GEsturing, 2006, IVA.
[13] Patrick Gebhard, et al. ALMA: a layered model of affect, 2005, AAMAS '05.
[14] Igor S. Pandzic, et al. Towards real-time speech-based facial animation applications built on HUGE architecture, 2008, AVSP.
[15] Michael Neff, et al. An annotation scheme for conversational gestures: how to economically capture timing and form, 2007, Lang. Resour. Evaluation.
[16] D. Schroeder, et al. Blink Rate: A Possible Measure of Fatigue, 1994, Human Factors.
[17] Witold Pedrycz, et al. Ambient Intelligence, Wireless Networking, and Ubiquitous Computing, 2006.
[18] Jan L. G. Dietz, et al. The pragmatic web: a manifesto, 2006, CACM.
[19] Ipke Wachsmuth, et al. Modelling Communication with Robots and Virtual Humans, 2008.
[20] A. Bentivoglio, et al. Analysis of blink rate patterns in normal subjects, 1997, Movement Disorders: Official Journal of the Movement Disorder Society.
[21] Maurizio Mancini, et al. Levels of Representation in the Annotation of Emotion for the Specification of Expressivity in ECAs, 2005, IVA.
[22] Constantine Stephanidis. Intelligent and ubiquitous interaction environments, 2009.
[23] Alexis Héloir, et al. Realizing Multimodal Behavior: Closing the Gap between Behavior Planning and Embodied Agent Presentation, 2010, IVA.
[24] Jörn Ostermann, et al. Animation of synthetic faces in MPEG-4, 1998, Proceedings Computer Animation '98 (Cat. No.98EX169).
[25] Stefan Kopp, et al. Modeling Embodied Feedback with Virtual Humans, 2006, ZiF Workshop.
[26] Kristiina Jokinen, et al. Gaze and Gesture Activity in Communication, 2009, HCI.
[27] Johan F. Hoorn, et al. ECA Perspectives: Requirements, Applications, Technology, 2004, Evaluating Embodied Conversational Agents.
[28] J. Cassell, et al. Embodied conversational agents, 2000.
[29] Mark Steedman, et al. APML, a Markup Language for Believable Behavior Generation, 2004, Life-like Characters.
[30] Mark R. Mine, et al. The Panda3D Graphics Engine, 2004, Computer.
[31] Radoslaw Niewiadomski, et al. Multimodal Complex Emotions: Gesture Expressivity and Blended Facial Expressions, 2006, Int. J. Humanoid Robotics.
[32] Irene Albrecht, et al. Automatic Generation of Non-Verbal Facial Expressions from Speech, 2002.
[33] Christoph Bregler, et al. Mood swings: expressive speech animation, 2005, ACM Trans. Graph.
[34] Algirdas Pakstas, et al. MPEG-4 Facial Animation: The Standard, Implementation and Applications, 2002.
[35] Keiichi Tokuda, et al. Text-to-visual speech synthesis based on parameter generation from HMM, 1998, Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '98 (Cat. No.98CH36181).
[36] Fumio Harashima, et al. Natural Interface Using Pointing Behavior for Human–Robot Gestural Interaction, 2007, IEEE Transactions on Industrial Electronics.
[37] Matej Rojc, et al. Time and space-efficient architecture for a corpus-based text-to-speech synthesis system, 2007, Speech Commun.
[38] Stefan Kopp, et al. Towards a Common Framework for Multimodal Generation: The Behavior Markup Language, 2006, IVA.
[39] George N. Votsis, et al. Emotion recognition in human-computer interaction, 2001, IEEE Signal Process. Mag.
[40] Francisco J. Serón, et al. Maxine: A platform for embodied animated agents, 2008, Comput. Graph.
[41] Kostas Karpouzis, et al. Towards modeling embodied conversational agent character profiles using appraisal theory predictions in expression synthesis, 2009, Applied Intelligence.
[42] Stefan Kopp, et al. MURML: A Multimodal Utterance Representation Markup Language for Conversational Agents, 2002.