Ontological query language for content based image retrieval

The paper discusses the design and implementation of the OQUEL query language for content-based image retrieval. The retrieval process takes place entirely within the ontological domain defined by the syntax and semantics of the user query. Since the system does not rely on pre-annotation of images with sentences in the language, the format of text queries is highly flexible. The language is also extensible, allowing higher-level terms such as "cars", "people", and "buildings" to be defined on the basis of existing language constructs. Images are retrieved by deriving an abstract syntax tree from a textual user query and evaluating it probabilistically by analysing the composition and perceptual properties of salient image regions in light of the query. The matching process utilises automatically extracted image segmentation and classification information and can incorporate any other feature extraction mechanisms or contextual knowledge available at processing time.
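To make the query-to-AST-to-score pipeline concrete, the following is a minimal sketch of how a parsed query might be evaluated probabilistically against segmented region data. It is not the paper's implementation: the node classes (`Term`, `And`, `Or`), the `Region` structure, the area-weighted evidence combination, and the noisy-AND/noisy-OR scoring are all illustrative assumptions.

```python
# Hypothetical sketch: scoring an OQUEL-style query AST against
# pre-computed image regions. All names and formulas here are
# illustrative, not taken from the paper.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Region:
    """A salient image region with classifier label probabilities."""
    label_probs: Dict[str, float]   # e.g. {"grass": 0.8, "sky": 0.1}
    area_fraction: float            # fraction of the image it covers

# --- Abstract syntax tree nodes ---------------------------------------
@dataclass
class Term:                         # leaf: a visual category, e.g. "grass"
    label: str

@dataclass
class And:                          # conjunction of sub-queries
    left: object
    right: object

@dataclass
class Or:                           # disjunction of sub-queries
    left: object
    right: object

def evaluate(node, regions: List[Region]) -> float:
    """Estimate the probability that the image satisfies the query node."""
    if isinstance(node, Term):
        # Evidence for a term: probability that at least one salient
        # region carries the label, weighted by the region's size.
        p_absent = 1.0
        for r in regions:
            p = r.label_probs.get(node.label, 0.0) * r.area_fraction
            p_absent *= (1.0 - p)
        return 1.0 - p_absent
    if isinstance(node, And):       # noisy-AND: both parts must hold
        return evaluate(node.left, regions) * evaluate(node.right, regions)
    if isinstance(node, Or):        # noisy-OR: either part may hold
        pl = evaluate(node.left, regions)
        pr = evaluate(node.right, regions)
        return pl + pr - pl * pr
    raise TypeError(f"unknown AST node: {node!r}")

# Query "grass and (water or sky)" as a hand-built AST:
query = And(Term("grass"), Or(Term("water"), Term("sky")))

image_regions = [
    Region({"grass": 0.85}, 0.55),
    Region({"sky": 0.70, "cloud": 0.20}, 0.30),
]
print(f"relevance score: {evaluate(query, image_regions):.3f}")
```

In this sketch, a higher-level term such as "buildings" could be supported simply by rewriting it into an AST over existing constructs before evaluation, which mirrors the extensibility described above; the actual grammar, region features, and probabilistic combination rules used by OQUEL are defined in the paper itself.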