Panoramic, adaptive and reconfigurable interface for similarity search

We outline the architecture of a content-based image retrieval (CBIR) interface that gives the user a graphical tool for creating new features by showing, rather than telling, the system what they mean. The user can interactively classify images by dragging and dropping them into different piles and then instruct the interface to derive features that mimic this classification. We show how logistic regression and Sammon projection can be used to supervise this search mode.
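The two components named above can be sketched in a few lines. The following is a hypothetical NumPy illustration, not the paper's implementation: `logistic_feature` fits a logistic-regression direction separating two user-made piles (the learned score acting as a new 1-D feature), and `sammon` is a minimal gradient-descent Sammon mapping for laying the images out on screen. Function names, learning rates, and iteration counts are illustrative assumptions.

```python
import numpy as np

def logistic_feature(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression direction separating two piles.

    X: (n, d) image feature vectors; y: 0/1 pile labels.
    Returns (w, b); the score X @ w + b serves as a new 1-D feature.
    (Illustrative sketch, not the paper's implementation.)
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                                # gradient of log-loss
        w -= lr * (X.T @ g) / n
        b -= lr * g.mean()
    return w, b

def sammon(X, dim=2, n_iter=300, lr=0.3, seed=0):
    """Minimal Sammon mapping via gradient descent on the Sammon stress.

    Projects X to `dim` dimensions while preserving pairwise distances,
    weighting small distances more heavily than plain MDS would.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # input distances
    D[D == 0] = 1e-9
    c = D.sum() / 2.0                                     # stress normalizer
    Y = rng.normal(scale=1e-2, size=(n, dim))             # random start
    for _ in range(n_iter):
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        np.fill_diagonal(d, 1e-9)
        delta = (D - d) / (D * d)
        np.fill_diagonal(delta, 0.0)
        # gradient of Sammon stress w.r.t. each output point
        grad = (2.0 / c) * (-delta[:, :, None] * (Y[:, None] - Y[None, :])).sum(axis=1)
        Y -= lr * grad
    return Y
```

In this sketch, the interface would call `logistic_feature` on the user's piles to obtain a feature that mimics the drag-and-drop classification, then use `sammon` to place thumbnails on the panoramic display so that visually similar images land near each other.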