Optimizing multimodal reranking for web image search

In this poster, we introduce a web image search reranking approach that exploits multiple modalities. Different from conventional methods that build a graph over a single feature set for reranking, our approach integrates multiple feature sets that describe the visual content from different aspects. We jointly learn the relevance scores, the weights of the feature sets, the distance metric, and the scaling of each feature set in a unified scheme. Experimental results on a large dataset containing more than 1,100 queries and 1 million images demonstrate the effectiveness of our approach.
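The abstract does not spell out the optimization, but joint multi-graph reranking of this kind is commonly formulated as graph-Laplacian regularization with learned per-modality weights. The sketch below is an illustrative reconstruction under that assumption, not the authors' exact method: every function name, the Gaussian-kernel graph construction, and the hyperparameters `mu` and `r` are hypothetical choices, and the alternating updates (solve for scores, then reweight modalities in closed form) follow the standard multi-graph learning recipe.

```python
import numpy as np

def multigraph_rerank(feature_sets, y, mu=1.0, r=2.0, n_iters=10):
    """Illustrative multi-graph reranking sketch (assumed formulation,
    not the poster's exact algorithm).

    feature_sets: list of (n, d_m) arrays, one per modality.
    y: (n,) initial relevance scores from the text-based search.
    mu: weight of the fitting constraint ||f - y||^2.
    r:  exponent (> 1) that smooths the learned modality weights.
    """
    n = len(y)
    # Build one Gaussian-kernel affinity graph per feature set.
    laplacians = []
    for X in feature_sets:
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        sigma2 = np.median(d2) + 1e-12        # heuristic bandwidth
        W = np.exp(-d2 / sigma2)
        np.fill_diagonal(W, 0.0)
        D = np.diag(W.sum(1))
        laplacians.append(D - W)              # unnormalized graph Laplacian

    M = len(laplacians)
    alpha = np.full(M, 1.0 / M)               # modality weights, learned jointly
    f = y.astype(float).copy()
    for _ in range(n_iters):
        # Score update: minimize sum_m alpha_m^r f'L_m f + mu||f - y||^2,
        # i.e. solve (sum_m alpha_m^r L_m + mu I) f = mu y.
        L = sum(a ** r * Lm for a, Lm in zip(alpha, laplacians))
        f = np.linalg.solve(L + mu * np.eye(n), mu * y)
        # Weight update: closed form from each graph's smoothness cost,
        # alpha_m proportional to cost_m^{1/(1-r)}.
        cost = np.array([max(f @ Lm @ f, 1e-12) for Lm in laplacians])
        alpha = cost ** (1.0 / (1.0 - r))
        alpha /= alpha.sum()
    return f, alpha
```

Modalities whose graphs make the current scores look smooth receive larger weights, so an uninformative feature set is automatically down-weighted rather than polluting the reranked list.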
