Simulated Testing of an Adaptive Multimedia Information Retrieval System

The semantic gap is widely regarded as a bottleneck in image and video retrieval. One way to improve communication between the user and the system is to exploit the user's actions with the system, e.g. to infer the relevance, or otherwise, of a video shot the user has viewed. In this paper we introduce a novel video retrieval system and propose a model of implicit information for interpreting the user's actions with the interface. The assumptions on which this model is built are then analysed in an experiment that uses simulated users, driven by relevance judgements, to compare the results of explicit and implicit retrieval cycles. Our model appears to enhance retrieval results; the results are presented and discussed in the final section.
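To illustrate the kind of simulated-user comparison the abstract describes, the sketch below runs the same retrieval and query-expansion loop twice: once with explicit feedback taken directly from ground-truth relevance judgements, and once with implicit feedback inferred from simulated user actions (here, dwell time). The toy shot collection, the dwell-time model, and the 5-second threshold are illustrative assumptions, not the paper's actual system or model.

```python
# A minimal sketch (assumed data and parameters, not the authors' system):
# compare explicit vs. implicit feedback cycles with a simulated user.
from collections import Counter
import random

# Toy "video shots", each described by a bag of annotation terms.
SHOTS = {
    "shot1": ["boat", "river", "water"],
    "shot2": ["boat", "harbour", "sea"],
    "shot3": ["car", "road", "traffic"],
    "shot4": ["river", "bridge", "water"],
    "shot5": ["sea", "beach", "sun"],
}

# Ground-truth relevance judgements (qrels) for the simulated topic.
QRELS = {"shot1", "shot2", "shot4"}


def retrieve(query_terms, k=3):
    """Rank shots by simple term overlap with the query."""
    scores = {
        sid: sum(Counter(terms)[t] for t in query_terms)
        for sid, terms in SHOTS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]


def explicit_feedback(ranked):
    """Simulated user explicitly marks the shots that are truly relevant."""
    return [sid for sid in ranked if sid in QRELS]


def implicit_feedback(ranked):
    """Infer relevance from simulated actions (dwell time on a shot).

    Relevant shots are watched longer on average; the noise model and
    threshold are assumptions of this sketch only.
    """
    inferred = []
    for sid in ranked:
        dwell = random.gauss(8.0 if sid in QRELS else 3.0, 2.0)  # seconds
        if dwell > 5.0:  # assumed implicit-relevance threshold
            inferred.append(sid)
    return inferred


def feedback_cycle(query_terms, feedback_fn):
    """One retrieval, feedback, and query-expansion cycle."""
    ranked = retrieve(query_terms)
    relevant = feedback_fn(ranked)
    expanded = set(query_terms)
    for sid in relevant:
        expanded.update(SHOTS[sid])  # naive query expansion from feedback
    return retrieve(list(expanded))


def precision(ranked):
    """Fraction of the returned shots that are truly relevant."""
    return sum(sid in QRELS for sid in ranked) / len(ranked)


if __name__ == "__main__":
    random.seed(0)
    query = ["boat", "water"]
    for name, fn in [("explicit", explicit_feedback), ("implicit", implicit_feedback)]:
        result = feedback_cycle(query, fn)
        print(f"{name:8s} feedback: {result}  precision={precision(result):.2f}")
```

Comparing the precision of the two runs over many simulated topics is one way such an experiment could quantify how closely implicit evidence approximates explicit judgements.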
