Typicality-Based Visual Search Reranking

Most existing approaches to visual search reranking mine information only from the initial ranking order, on the basis of pseudo-relevance feedback. However, the initial ranking order alone cannot always provide enough cues for reranking, because the initial visual search performance is often unsatisfactory. This letter presents a novel approach to visual search reranking that selects typical examples to build the reranking model. Observing that typical examples are mostly clearly visible, fill the majority of the visual document, or appear in one of several common poses, classifiers informed by these examples are generally more robust to noisy test cases involving occlusion, illumination changes, or other confounding factors. We first define typicality on the basis of the data distribution, then formalize example selection as an optimization problem over example typicality and derive a closed-form solution. A support vector machine trained on the selected examples then serves as the reranking model. Extensive experiments on a real-world image set and a benchmark video set show significant and consistent improvements over state-of-the-art methods.
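The pipeline sketched above can be illustrated roughly as follows. This is not the paper's actual formulation: the Gaussian kernel-density typicality score, the greedy top-m selection (standing in for the paper's closed-form optimization), and all function names and parameters are assumptions made purely for illustration. The sketch scores typicality over the top-ranked results, trains an SVM on typical pseudo-positives against low-ranked pseudo-negatives, and reranks by decision value.

```python
import numpy as np
from sklearn.svm import LinearSVC


def typicality_scores(feats, sigma=1.0):
    """Assumed typicality: a Gaussian kernel-density estimate, so that
    examples lying in dense regions of the feature distribution score
    higher (a stand-in for the paper's distribution-based definition)."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    return k.mean(axis=1)


def select_typical(feats, m):
    """Pick the m most typical examples (a greedy stand-in for the
    paper's closed-form example-selection solution)."""
    order = np.argsort(-typicality_scores(feats))
    return order[:m]


def rerank(feats, init_order, m=5, n_neg=5):
    """Rerank: treat the most typical of the top-ranked documents as
    pseudo-positives, the bottom-ranked documents as pseudo-negatives,
    fit a linear SVM, and sort all documents by decision value."""
    top = init_order[:2 * m]
    pos = top[select_typical(feats[top], m)]
    neg = init_order[-n_neg:]
    X = np.vstack([feats[pos], feats[neg]])
    y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
    clf = LinearSVC().fit(X, y)
    return np.argsort(-clf.decision_function(feats))
```

On synthetic data with well-separated relevant and irrelevant clusters, the reranked order places the relevant cluster first; in practice the typicality definition and the selection objective are the parts this sketch simplifies most.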
