Design of an Interactive Humanoid Character for Multimodal Communication

The cognitive impact factor is one of the main measures of media-human interaction. In this study, we describe the design and implementation of a semi-autonomous humanoid character display system that produces a high cognitive impact on people. It combines marketing, entertainment, announcements, guidance, and service sales in a single device. Unlike the conventional automated teller machine (ATM) approach, the interactive humanoid character (IHC) first detects nearby people and initiates communication with them. A finite state machine model determines the likely requirement of the person in communication by navigating pre-defined behavioral states as the interaction unfolds. This form of auto-controlled telepresence saves marketing, sales, and service man-hours while maximizing the media reach factor.
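The behavioral state machine described above can be sketched as a simple transition table keyed on (state, event) pairs. This is only an illustrative outline; the state and event names below are hypothetical placeholders, not the states defined in the paper.

```python
# Illustrative sketch of a behavioral finite state machine (FSM) for the IHC.
# State and event names are assumptions for demonstration, not from the paper.

IDLE, GREETING, INQUIRY, SERVICE, FAREWELL = (
    "idle", "greeting", "inquiry", "service", "farewell",
)

# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    (IDLE, "person_detected"): GREETING,
    (GREETING, "person_responds"): INQUIRY,
    (GREETING, "person_leaves"): IDLE,
    (INQUIRY, "request_identified"): SERVICE,
    (INQUIRY, "person_leaves"): IDLE,
    (SERVICE, "request_fulfilled"): FAREWELL,
    (FAREWELL, "done"): IDLE,
}


class BehavioralFSM:
    def __init__(self):
        self.state = IDLE

    def handle(self, event):
        """Advance to the next behavioral state; undefined events are ignored."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

In this scheme, a person detection event moves the character out of its idle (attract) state into a greeting, and each subsequent interaction event narrows down the person's probable requirement until the character returns to idle.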
