We propose novel algorithms for organizing large image and video datasets using both the visual content and the associated side-information, such as time, location, and authorship. Whereas earlier research has used side-information as a pre-filter before visual analysis, we design a machine learning algorithm to model the joint statistics of the content and the side-information. Our algorithm, Diverse-Density Contextual Clustering (D2C2), starts by finding patterns unique to each sub-collection sharing the same side-information, e.g., scenes from winter. It then finds the common patterns shared among all subsets, e.g., scenes that persist across all seasons. These unique and common prototypes are found with Multiple Instance Learning and subsequent clustering steps. We evaluate D2C2 on two web photo collections from Flickr and one news video collection from TRECVID. Results show that not only are the visual patterns found by D2C2 intuitively salient across different seasons, locations, and events, but classifiers constructed from the unique and common patterns also outperform state-of-the-art bag-of-features classifiers.
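To make the two-stage pipeline concrete, the following is a minimal sketch, assuming precomputed visual features already grouped by their side-information value. It is not the paper's implementation: diverse_density_score is a simplified Gaussian-similarity surrogate for the true diverse-density objective, scikit-learn's KMeans stands in for the paper's MIL optimization and clustering steps, and all names (d2c2, candidate_prototypes, n_unique) are hypothetical.

```python
# Illustrative sketch of the D2C2 pipeline described in the abstract.
# The scoring below is a simplified stand-in for diverse density, not
# the paper's actual formulation.
import numpy as np
from sklearn.cluster import KMeans

def candidate_prototypes(features, k=10, seed=0):
    """Cluster one sub-collection's features into k candidate patterns."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(features)
    return km.cluster_centers_

def diverse_density_score(prototype, pos_feats, neg_feats, sigma=1.0):
    """Score is high when the prototype lies close to the target
    sub-collection and far from all other sub-collections."""
    def mean_sim(feats):
        d2 = np.sum((feats - prototype) ** 2, axis=1)
        return np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return mean_sim(pos_feats) - mean_sim(neg_feats)

def d2c2(subsets, k=10, n_unique=3):
    """subsets: dict mapping a side-information value (e.g., a season)
    to an (n, d) array of visual features for that sub-collection."""
    unique, all_protos = {}, []
    for tag, feats in subsets.items():
        negatives = np.vstack([f for t, f in subsets.items() if t != tag])
        protos = candidate_prototypes(feats, k)
        scores = [diverse_density_score(p, feats, negatives) for p in protos]
        order = np.argsort(scores)[::-1]
        unique[tag] = protos[order[:n_unique]]  # subset-specific patterns
        all_protos.append(protos)
    # Common patterns: pool every subset's prototypes and cluster them;
    # clusters that draw members from all subsets approximate the
    # patterns shared across the whole collection.
    pooled = np.vstack(all_protos)
    common = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pooled)
    return unique, common.cluster_centers_

if __name__ == "__main__":
    # Synthetic demo: two "seasons" of 16-dimensional features.
    rng = np.random.default_rng(0)
    subsets = {
        "winter": rng.normal(loc=0.0, size=(200, 16)),
        "summer": rng.normal(loc=0.5, size=(200, 16)),
    }
    unique, common = d2c2(subsets)
    print({t: p.shape for t, p in unique.items()}, common.shape)
```

The unique and common prototypes returned by such a procedure would serve as the vocabulary from which the per-context classifiers in the evaluation are built.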