The recent literature has shown that the principal difficulty in multimedia retrieval is bridging the semantic gap between the user's wishes and the user's ability to formulate queries. This insight has spawned two main directions of research: Query By Example (QBE) with relevance feedback (i.e. learning to improve the result of a previously formulated query), and research into query formulation techniques such as browsing or query by sketch. Browsing techniques try to help the user find the target image, or an image sufficiently close to the desired result that it can be used in a subsequent QBE query. From the feature-space viewpoint, each browsing system tries to let the user move consciously through feature space and eventually reach the target image. How best to provide this functionality is presently an open question; in fact, even objective performance evaluation and comparison of browsing paradigms is difficult. We distinguish between deterministic browsers, which try to maximize the user's ability to learn how the system behaves, and stochastic browsers, which are based on more sophisticated Monte Carlo algorithms and thus sacrifice reproducibility for better performance. At present, these two browsing paradigms are practically incomparable except through large-scale user studies, which makes it infeasible for research groups to evaluate incremental improvements of browsing schemes. Moreover, automated benchmarks in the current literature simulate the user with a model derived directly from the distance measures used within the tested systems; such a circular reference cannot provide a serious alternative to real user tests. In this paper, we present an automatic benchmark that uses user-annotated collections to simulate the semantic gap, thus providing a means for automatic evaluation and comparison of the different browsing paradigms. We use a very precise annotation of a few words per image, together with a thesaurus, to obtain sufficiently smooth behavior of the annotation-based user model. We discuss the design and evaluation of this annotation as well as the implementation of the benchmark as an MRML-compliant (Multimedia Retrieval Markup Language) script with pluggable modules that allow testing of new interaction schemes.
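The abstract describes the annotation-based user model only in prose. As an illustration, the following minimal Python sketch shows one way such a simulated user could judge relevance, assuming each image carries a small set of keyword annotations and a thesaurus maps each keyword to related terms. All names and thesaurus entries here (THESAURUS, relevance, simulated_click) are hypothetical and not taken from the paper.

```python
from typing import Dict, List, Set

# Hypothetical thesaurus: each keyword maps to a set of related terms.
# In the benchmark this role would be played by a real thesaurus over
# the annotation vocabulary; these entries are invented for illustration.
THESAURUS: Dict[str, Set[str]] = {
    "dog":   {"canine", "puppy"},
    "beach": {"shore", "coast"},
    "shore": {"beach"},
}

def expand(annotation: Set[str], thesaurus: Dict[str, Set[str]]) -> Set[str]:
    """Add thesaurus neighbours of every keyword, smoothing the model."""
    expanded = set(annotation)
    for keyword in annotation:
        expanded |= thesaurus.get(keyword, set())
    return expanded

def relevance(target: Set[str], candidate: Set[str],
              thesaurus: Dict[str, Set[str]]) -> float:
    """Jaccard overlap of the expanded annotations: a stand-in for the
    simulated user's semantic judgement, deliberately independent of the
    feature-space distances used inside the browser under test."""
    a, b = expand(target, thesaurus), expand(candidate, thesaurus)
    return len(a & b) / len(a | b) if a | b else 0.0

def simulated_click(target: Set[str], displayed: List[Set[str]],
                    thesaurus: Dict[str, Set[str]]) -> int:
    """Index of the displayed image the simulated user would select."""
    return max(range(len(displayed)),
               key=lambda i: relevance(target, displayed[i], thesaurus))

# Example: the model prefers the image whose annotation is semantically
# closer to the target, even without any exact keyword match.
print(simulated_click({"dog", "beach"},
                      [{"car", "street"}, {"puppy", "shore"}], THESAURUS))
# -> 1
```

The point this sketch is meant to capture is the one the abstract argues: the simulated user's judgement is derived from human annotations and a thesaurus, not from the distance measures of the system under test, which avoids the circular evaluation of earlier automated benchmarks.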