Concept representation based video indexing

This poster introduces a concept-based video indexing approach built on a rich set of base concepts for which pre-trained detection models are available. Given a new concept with only a few labeled samples, we approximate it as a combination of the base concepts, and its model is derived from that combination. Empirical results demonstrate that the method achieves strong performance even with very limited labeled data. We compare different representation approaches, both sparse and non-sparse, and find that the sparse representation yields substantially better performance.
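As a rough illustration of the idea (not the authors' implementation), a new concept can be expressed as a sparse linear combination of base concept detector scores, for example via a Lasso fit; all variable names, the number of base concepts, and the regularization strength below are assumptions for the sketch.

```python
# Minimal sketch, assuming base concept detectors already produce scores per clip.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Scores of (hypothetically) 374 base concept detectors on a handful of
# labeled training clips for the new concept (rows: clips, columns: base concepts).
n_clips, n_base = 20, 374
base_scores = rng.random((n_clips, n_base))
labels = rng.integers(0, 2, size=n_clips).astype(float)  # 1 = concept present

# Sparse fit: the L1 penalty drives most base-concept weights to zero,
# keeping only the few base concepts relevant to the new concept.
model = Lasso(alpha=0.05, max_iter=10000)
model.fit(base_scores, labels)
sparse_weights = model.coef_  # sparse representation of the new concept
print("non-zero base concepts:", np.count_nonzero(sparse_weights))

# Indexing a new video: combine its base-concept scores with the learned weights.
new_video_scores = rng.random(n_base)
concept_score = float(new_video_scores @ sparse_weights + model.intercept_)
print("predicted relevance to the new concept:", concept_score)
```

A non-sparse baseline would simply swap the Lasso for ridge or ordinary least squares; the poster's comparison favors the sparse variant.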
