Query-independent learning for video search

Most existing learning-based methods for query-by-example treat the query examples as "positive" and build a separate model for each query. These methods, referred to as query-dependent, have achieved only limited success, as they can hardly be applied to real-world applications in which an arbitrary query is usually given. To address this problem, we propose to learn a query-independent model by exploiting the relevance information that exists in each query-document pair. The proposed approach takes a query-document pair as a sample and extracts a set of query-independent textual and visual features from each pair. It is general and suitable for a real-world video search system, since the learned relevance relation is independent of any particular query. We conducted extensive experiments over the TRECVID 2005-2007 corpus and show superior performance (+37% in Mean Average Precision) over query-dependent learning approaches.
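The core idea, pooling query-document pairs from many different queries and learning one shared relevance model over query-independent pair features, can be illustrated with the minimal sketch below. The feature functions and the logistic-regression learner are assumptions for illustration only, not the paper's actual features or learning algorithm.

```python
# Minimal sketch (not the authors' implementation): one relevance model
# trained over query-document pairs instead of one model per query.
# The pair features (textual/visual similarity) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(query, doc):
    """Hypothetical query-independent features of a (query, document) pair,
    e.g. a textual similarity score and a visual similarity score."""
    return np.array([
        query["text_vec"] @ doc["text_vec"],      # textual relevance feature
        query["visual_vec"] @ doc["visual_vec"],  # visual relevance feature
    ])

def train_query_independent(pairs, labels):
    """Pool labeled pairs from many queries and fit a single model."""
    X = np.vstack([pair_features(q, d) for q, d in pairs])
    model = LogisticRegression()
    model.fit(X, labels)  # one model shared across all queries
    return model

def rank(model, query, docs):
    """At search time, the same model ranks documents for an unseen query."""
    X = np.vstack([pair_features(query, d) for d in docs])
    scores = model.predict_proba(X)[:, 1]
    return [docs[i] for i in np.argsort(-scores)]
```

Because the model is trained on pair-level features rather than on examples of a specific query, it can score documents for queries never seen during training, which is the property the abstract highlights for real-world video search.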