Perceived Similarity and Visual Descriptions in Content-Based Image Retrieval

The use of low-level feature descriptors is pervasive in content-based image retrieval, yet it remains unclear how well these features capture users' intentions. In this paper we devise experiments to gauge the degree of alignment between how humans describe target images and the description implicitly provided by low-level image feature descriptors. Data was collected on how humans perceive similarity in images. Using images judged by humans to be similar as ground truth, the performance of several MPEG-7 visual feature descriptors was evaluated. We find that different descriptors play different roles in different queries, and that an appropriate combination of descriptors can improve retrieval performance. This forms a basis for the development of adaptive weight assignment to features depending on the query and retrieval task.
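
To make the idea of combining descriptors concrete, the following is a minimal sketch of a weighted-sum retrieval scheme. It is not the paper's implementation: the descriptor names (`colour_layout`, `edge_histogram`, `region_shape`), the weight values, and the use of Euclidean distance in place of the MPEG-7 matching measures are all illustrative assumptions.

```python
import numpy as np


def combined_distance(query_feats, candidate_feats, weights):
    """Weighted sum of per-descriptor distances between a query image and a
    candidate image. Each entry of the feature dicts is the vector produced
    by one descriptor; `weights` holds the per-descriptor weights."""
    total = 0.0
    for name, w in weights.items():
        q = np.asarray(query_feats[name], dtype=float)
        c = np.asarray(candidate_feats[name], dtype=float)
        # Plain L2 distance per descriptor; MPEG-7 specifies its own
        # matching measures, so this metric is only a placeholder.
        total += w * np.linalg.norm(q - c)
    return total


def rank_images(query_feats, database, weights):
    """Return (image_id, distance) pairs sorted by ascending combined distance."""
    scored = [(img_id, combined_distance(query_feats, feats, weights))
              for img_id, feats in database.items()]
    return sorted(scored, key=lambda pair: pair[1])


if __name__ == "__main__":
    # Hypothetical per-query weights favouring colour over texture and shape.
    weights = {"colour_layout": 0.6, "edge_histogram": 0.3, "region_shape": 0.1}

    # Tiny synthetic example with one query and two database images.
    query = {"colour_layout": [0.2, 0.4],
             "edge_histogram": [0.1, 0.9],
             "region_shape": [0.5, 0.5]}
    database = {
        "img_a": {"colour_layout": [0.25, 0.35],
                  "edge_histogram": [0.2, 0.8],
                  "region_shape": [0.4, 0.6]},
        "img_b": {"colour_layout": [0.9, 0.1],
                  "edge_histogram": [0.7, 0.3],
                  "region_shape": [0.1, 0.9]},
    }
    print(rank_images(query, database, weights))
```

Adaptive weight assignment, as proposed in the abstract, would amount to choosing the `weights` dictionary per query rather than fixing it globally.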
