Modeling user subjectivity in image libraries

In addition to the question of which image analysis models to use in digital libraries (e.g., wavelet, Wold, color histograms), there is the problem of how to combine these models, each with its own strengths. Most current systems place the burden of combination on the user, e.g., the user specifies 50% texture features, 20% color features, and so on. This is problematic because most users do not know how to choose the best settings for a given data set and search problem. This paper addresses the problem, describing research in progress on a system that: (1) automatically infers which combination of models best represents the data of interest to the user; and (2) learns continuously during interaction with each user. In particular, these two components, inference and learning, provide a solution that adapts to the subjective and hard-to-predict behaviors frequently seen when people query or browse image libraries.
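To make the idea of combining models and adapting the combination from user interaction concrete, the sketch below shows one simple way such a loop could look. It is not the paper's actual inference or learning algorithm: the model names, the weighted-sum scoring, and the multiplicative-weights update from relevance feedback are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's method): combine per-model similarity
# scores with weights that are adapted online from the user's relevance feedback.
import numpy as np

class WeightedModelCombiner:
    """Combine per-model similarity scores; learn weights from feedback."""

    def __init__(self, model_names, learning_rate=0.5):
        self.model_names = list(model_names)
        # Start with all models weighted equally.
        self.weights = np.ones(len(self.model_names)) / len(self.model_names)
        self.learning_rate = learning_rate

    def score(self, per_model_scores):
        """Weighted combination of one image's per-model scores (each in [0, 1])."""
        return float(np.dot(self.weights, per_model_scores))

    def update(self, per_model_scores, relevant):
        """Multiplicative-weights update from a single relevance judgment.

        Models that scored high on an image the user marked relevant (or low on
        an image marked irrelevant) gain weight; weights are then renormalized.
        """
        agreement = np.asarray(per_model_scores) if relevant else 1.0 - np.asarray(per_model_scores)
        self.weights *= np.exp(self.learning_rate * agreement)
        self.weights /= self.weights.sum()


# Hypothetical usage: three models scoring one candidate image against a query.
combiner = WeightedModelCombiner(["wavelet_texture", "wold_texture", "color_histogram"])
scores = np.array([0.9, 0.7, 0.2])       # assumed per-model similarities
print(combiner.score(scores))             # combined score under current weights
combiner.update(scores, relevant=True)    # user marks this image as relevant
print(combiner.weights)                   # texture models gain weight
```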