T-IRS: textual query based image retrieval system for consumer photos
In this demonstration, we present a (quasi) real-time textual-query-based image retrieval system (T-IRS) for consumer photos that leverages millions of web images and their associated rich textual descriptions (captions, categories, etc.). After a user provides a textual query (e.g., "boat"), our system automatically finds positive web images related to the query "boat" as well as negative web images that are irrelevant to it. Based on these automatically retrieved positive and negative web images, we employ a decision stump ensemble classifier to rank the personal consumer photos. To further improve retrieval performance, we also develop a novel relevance feedback method, referred to as Cross-Domain Regularized Regression (CDRR), which effectively utilizes both the web images and the consumer images. Our system is inherently not limited by any predefined lexicon.
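The ranking step can be illustrated with a minimal decision-stump-ensemble sketch: each stump thresholds a single feature dimension, stumps are fit on the automatically gathered positive and negative web images, and consumer photos are scored by an accuracy-weighted vote. This is only an illustrative sketch under assumed details (per-dimension stumps, accuracy-based weighting, a generic feature vector per image); the demo's actual training procedure may differ.

```python
import numpy as np

def train_stump(X, y, dim):
    """Fit a one-dimensional threshold classifier on feature `dim`.

    Labels y are in {+1, -1}. Returns (threshold, polarity, accuracy)
    for the best split found on the training data.
    """
    best = (X[0, dim], 1, 0.0)
    for t in np.unique(X[:, dim]):
        for pol in (1, -1):
            pred = np.where(pol * (X[:, dim] - t) > 0, 1, -1)
            acc = float(np.mean(pred == y))
            if acc > best[2]:
                best = (t, pol, acc)
    return best

def stump_ensemble_scores(X_web, y_web, X_photos):
    """Score consumer photos with an ensemble of per-dimension stumps
    trained on labeled web images; higher score = more relevant."""
    scores = np.zeros(len(X_photos))
    total = 0.0
    for d in range(X_web.shape[1]):
        t, pol, acc = train_stump(X_web, y_web, d)
        w = max(acc - 0.5, 0.0)  # only weight stumps better than chance
        scores += w * np.where(pol * (X_photos[:, d] - t) > 0, 1, -1)
        total += w
    return scores / total if total > 0 else scores
```

Consumer photos would then be sorted by these scores, with CDRR refining the ranking once the user marks a few photos as relevant or irrelevant.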