Virtual assistant: enhancing content acquisition by eliciting information from humans

In this paper, we propose the “Virtual Assistant,” a novel framework for supporting knowledge capture in video. The Virtual Assistant is an artificial agent that plays the role of the human assistant seen in TV programs, prompting users to provide information by asking questions. This framework ensures that sufficient information is included in the captured content while users interact with the agent in a natural and enjoyable way. We developed a prototype agent based on a chatbot-like approach and applied it to a daily cooking scenario. Experimental results demonstrate the potential of the Virtual Assistant framework: it allows a person to provide information easily, with few interruptions, and elicits a variety of useful information.
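The chatbot-like prompting described above could be sketched as a simple event-driven rule table: when the agent recognizes an event in the scene, it looks up a question to ask, and the answers are logged as annotations on the video timeline. This is a minimal illustrative sketch; the event names, rules, and logging scheme are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a rule-based "virtual assistant" prompter.
# Event labels and questions are illustrative, not from the paper.
RULES = {
    "new_ingredient": "What ingredient are you adding now?",
    "new_step": "What are you going to do next?",
    "idle": "Could you share a tip for this part of the recipe?",
}

def ask(event: str):
    """Return a question for a recognized event, or None to avoid interrupting."""
    return RULES.get(event)

def annotate(timeline):
    """Attach questions to timestamps where a known event was recognized."""
    log = []
    for timestamp, event in timeline:
        question = ask(event)
        if question is not None:
            log.append((timestamp, question))
    return log
```

For example, `annotate([(12, "new_ingredient"), (40, "chopping"), (75, "idle")])` would prompt at seconds 12 and 75 and stay silent during the unrecognized "chopping" interval, reflecting the goal of eliciting information with few interruptions.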
