Deep exploration for experiential image retrieval

Experiential image retrieval systems aim to provide the user with a natural and intuitive search experience. The goal is to empower users to navigate large collections according to their own needs and preferences, while giving them an accurate sense of what the database has to offer. In this paper we integrate a new browsing mechanism, called deep exploration, with the proven technique of retrieval by relevance feedback. In our approach, relevance feedback focuses the search on relevant regions, while deep exploration enables transparent navigation to promising regions of feature space that would otherwise remain unreachable. Optimal feature weights are determined automatically from the evidential support for the relevance of each individual feature. To refine the search space efficiently, images are ranked and presented to the user according to their likelihood of being useful for further exploration.
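
To make the feedback-driven reweighting idea concrete, the sketch below uses a classic MARS/Rocchio-style heuristic: each feature dimension is weighted by the inverse of its spread among the images the user marked relevant, and the database is re-ranked by weighted distance to a query shifted toward those examples. This is a minimal illustration under those assumptions only; it is not the paper's deep-exploration or exploration-oriented ranking scheme, and all function names and the toy data are hypothetical.

```python
import numpy as np

def reweight_features(relevant_feats, eps=1e-6):
    """MARS-style reweighting (assumed, not the paper's scheme):
    features on which the relevant examples agree (low spread)
    receive higher weight."""
    # relevant_feats: (n_relevant, n_features) matrix of feature vectors
    spread = relevant_feats.std(axis=0) + eps
    weights = 1.0 / spread
    return weights / weights.sum()          # normalize so weights sum to 1

def rank_database(db_feats, query_feat, weights):
    """Rank database images by weighted Euclidean distance to the query."""
    diffs = db_feats - query_feat           # (n_images, n_features)
    dists = np.sqrt(((diffs ** 2) * weights).sum(axis=1))
    return np.argsort(dists)                # image indices, nearest first

# One hypothetical feedback round: shift the query toward the relevant
# examples (Rocchio-style) and re-rank the collection with new weights.
rng = np.random.default_rng(0)
db = rng.random((1000, 64))                 # toy database, 64-d features
relevant = db[[3, 17, 42]]                  # images the user marked relevant
query = relevant.mean(axis=0)               # shifted query point
ranking = rank_database(db, query, reweight_features(relevant))
print(ranking[:10])                         # top candidates for the next round
```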
