Employing Virtual Humans for Interaction, Assistance and Information Provision in Ambient Intelligence Environments

This paper reports on the design, development, and evaluation of a framework that employs virtual humans for information provision. The framework can be used to create interactive multimedia information visualizations (e.g., images, text, audio, video, and 3D models), provides a dynamic data modeling mechanism for storage and retrieval, and supports communication through multimodal interaction techniques. Interaction may involve human-to-agent, agent-to-environment, or agent-to-agent communication. The framework supports alternative roles for the virtual agents, which may act as assistants for existing systems, as standalone "applications," or even as integral parts of emerging smart environments. Finally, an evaluation study was conducted with ten participants to assess the developed system's usability and effectiveness when employed as an assistance mechanism for another application. The evaluation results were highly positive, confirming the system's usability and encouraging further research in this area.
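To make the concepts in the abstract concrete, the following is a minimal sketch of how the framework's core notions (multimedia content types, agent roles, and communication modes) might be modeled. All class, enum, and method names here are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical data-model sketch of the framework described above.
# All names are assumptions for illustration, not the authors' implementation.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List


class MediaType(Enum):
    """Multimedia content kinds the framework can visualize."""
    IMAGE = auto()
    TEXT = auto()
    AUDIO = auto()
    VIDEO = auto()
    MODEL_3D = auto()


class AgentRole(Enum):
    """Alternative roles a virtual agent may assume."""
    ASSISTANT = auto()          # assists an existing application
    STANDALONE = auto()         # acts as a standalone "application"
    SMART_ENVIRONMENT = auto()  # integral part of a smart environment


class CommunicationMode(Enum):
    """Supported interaction channels."""
    HUMAN_TO_AGENT = auto()
    AGENT_TO_ENVIRONMENT = auto()
    AGENT_TO_AGENT = auto()


@dataclass
class InformationItem:
    """A unit of multimedia information stored for dynamic retrieval."""
    title: str
    media_type: MediaType
    uri: str


@dataclass
class VirtualAgent:
    """A virtual human configured with a role, interaction channels,
    and the information items it can present."""
    name: str
    role: AgentRole
    modes: List[CommunicationMode] = field(default_factory=list)
    repository: List[InformationItem] = field(default_factory=list)

    def retrieve(self, media_type: MediaType) -> List[InformationItem]:
        """Retrieve all stored items of a given media type."""
        return [it for it in self.repository if it.media_type == media_type]
```

For instance, an agent assisting an existing application could be instantiated as `VirtualAgent("guide", AgentRole.ASSISTANT, [CommunicationMode.HUMAN_TO_AGENT])` and populated with `InformationItem` entries, then queried via `retrieve(MediaType.VIDEO)`; this mirrors, under the stated assumptions, the storage-and-retrieval and role-assignment behavior the abstract describes.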
