Coupling Visual Semantics and High-Level Relational Characterization within a Transparent and Penetrable Image Retrieval Framework

We propose to enhance the retrieval performance of the S.I.R. image indexing and retrieval architecture [1,2] by integrating a query-by-example (QBE) framework that operates on high-level image descriptions rather than on extracted low-level features. The framework rests on a bi-facetted conceptual model coupling visual semantics with relational spatial characterization and manipulates image objects (abstractions of visual entities), allowing it to support querying operations beyond those of state-of-the-art relevance feedback (RF) frameworks. It also provides a rich query language built on Boolean operators, leading to more expressive user interaction and improved retrieval performance.
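
To make the bi-facetted model concrete, the sketch below shows one possible encoding of image objects that couples a visual-semantics facet (a concept label) with a relational facet (spatial relations to other objects), together with a toy Boolean query evaluator. All names (ImageObject, HasConcept, Related, And, Or, Not, retrieve) are illustrative assumptions; they do not reproduce the actual S.I.R. data model or query language.

```python
# Illustrative sketch only: the S.I.R. data model and query language are not
# reproduced here; every class and relation name below is hypothetical.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ImageObject:
    """Abstraction of a visual entity, coupling two facets:
    a visual-semantics facet (a semantic concept label) and a
    relational facet (spatial relations to other objects)."""
    ident: str
    concept: str  # visual-semantics facet, e.g. "sky"
    relations: frozenset = field(default_factory=frozenset)  # {(relation, other_ident)}


# --- Toy Boolean query language over image objects ---------------------------

class Query:
    def matches(self, obj: ImageObject, image: dict) -> bool:
        raise NotImplementedError


@dataclass
class HasConcept(Query):
    concept: str
    def matches(self, obj, image):
        return obj.concept == self.concept


@dataclass
class Related(Query):
    relation: str  # e.g. "below", "left_of"
    concept: str   # concept required of the related object
    def matches(self, obj, image):
        return any(rel == self.relation and image[other].concept == self.concept
                   for rel, other in obj.relations)


@dataclass
class And(Query):
    left: Query
    right: Query
    def matches(self, obj, image):
        return self.left.matches(obj, image) and self.right.matches(obj, image)


@dataclass
class Or(Query):
    left: Query
    right: Query
    def matches(self, obj, image):
        return self.left.matches(obj, image) or self.right.matches(obj, image)


@dataclass
class Not(Query):
    inner: Query
    def matches(self, obj, image):
        return not self.inner.matches(obj, image)


def retrieve(images: dict, query: Query) -> list:
    """Return names of images containing at least one object matching the query."""
    return [name for name, objs in images.items()
            if any(query.matches(o, objs) for o in objs.values())]


if __name__ == "__main__":
    # Two toy indexed images, each a mapping of object id -> ImageObject.
    beach = {
        "o1": ImageObject("o1", "sky"),
        "o2": ImageObject("o2", "sea", frozenset({("below", "o1")})),
    }
    street = {
        "o1": ImageObject("o1", "building"),
        "o2": ImageObject("o2", "road", frozenset({("below", "o1")})),
    }
    images = {"beach.jpg": beach, "street.jpg": street}

    # "Find images with a sea region located below a sky region."
    q = And(HasConcept("sea"), Related("below", "sky"))
    print(retrieve(images, q))  # -> ['beach.jpg']
```

In this toy evaluator, QBE amounts to translating a high-level description of an example image into such a Boolean expression over concepts and spatial relations, rather than into a distance computation over low-level feature vectors.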