Analysing the performance of visual, concept and text features in content-based video retrieval

This paper describes revised content-based search experiments conducted in the context of the TRECVID 2003 benchmark. The experiments measure content-based video retrieval performance with the following search cues: visual features, semantic concepts and text. Features are fused using weights and similarity ranks. Visual similarity is computed using Temporal Gradient Correlogram and Temporal Color Correlogram features, which are extracted from the dynamic content of a video shot. Automatic speech recognition transcripts and concept detectors enable higher-level semantic searching. The experiments used 60 hours of news video from the TRECVID 2003 search task, and system performance was evaluated on 25 pre-defined search topics using average precision. In visual search, multiple query examples improved the results over single-example search. Weighted fusion of text, concept and visual features improved performance over the text search baseline, and an expanded query term list also gave a notable increase in performance over baseline text search.
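The abstract states that the three modalities are fused using weights and similarity ranks. Below is a minimal sketch of one such weighted rank-based fusion scheme; the function name rank_fusion, the normalization of ranks to scores in (0, 1], and the example weights are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal sketch of weighted rank-based fusion, assuming each modality
# (text, concept, visual) has already produced a ranked list of shot IDs.
# The combination rule and weights are assumptions for illustration.

def rank_fusion(ranked_lists, weights):
    """Fuse per-modality rankings into one score per shot.

    ranked_lists: dict mapping modality name -> list of shot IDs, best first.
    weights:      dict mapping modality name -> non-negative weight.
    Returns shot IDs sorted by fused score (higher is better).
    """
    scores = {}
    for modality, shots in ranked_lists.items():
        w = weights.get(modality, 0.0)
        n = len(shots)
        for rank, shot in enumerate(shots):
            # Convert rank to a normalized score: top rank -> 1, last -> 1/n.
            scores[shot] = scores.get(shot, 0.0) + w * (n - rank) / n
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical usage with three modalities and example weights:
fused = rank_fusion(
    {"text": ["s3", "s1", "s7"],
     "concept": ["s1", "s3", "s9"],
     "visual": ["s7", "s1", "s3"]},
    {"text": 0.5, "concept": 0.3, "visual": 0.2},
)
print(fused)
```

Shots missing from a modality's list simply contribute no score from that modality, which is one common way to handle partial overlap between result lists.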

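The Temporal Color Correlogram pools color co-occurrence statistics over the frames of a shot rather than a single image. The sketch below illustrates that idea as a temporal color autocorrelogram; the distance set, the color quantization, and the restriction to same-color pairs (the autocorrelogram) are simplifying assumptions, not the authors' exact feature definition.

```python
import numpy as np

def temporal_color_autocorrelogram(frames, distances=(1, 3, 5, 7), n_colors=32):
    """Illustrative temporal color autocorrelogram over one shot.

    frames: iterable of HxW integer arrays of quantized color indices
            in [0, n_colors).
    For each distance d, estimates the probability that a pixel at
    horizontal or vertical offset d has the same color as the reference
    pixel, pooled over all sampled frames of the shot.
    """
    counts = np.zeros((n_colors, len(distances)))
    totals = np.zeros((n_colors, len(distances)))
    for img in frames:
        for j, d in enumerate(distances):
            # Horizontal and vertical pixel pairs at offset d.
            for a, b in ((img[:, :-d], img[:, d:]), (img[:-d, :], img[d:, :])):
                for c in range(n_colors):
                    mask = a == c
                    counts[c, j] += np.count_nonzero(b[mask] == c)
                    totals[c, j] += np.count_nonzero(mask)
    # Flatten the probability table to use it as a feature vector.
    return (counts / np.maximum(totals, 1)).ravel()
```
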
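Retrieval quality is reported as average precision per topic, the standard TREC measure: precision is taken at each rank where a relevant shot appears and averaged over the total number of relevant shots. A small sketch of non-interpolated average precision, with hypothetical shot IDs:

```python
def average_precision(ranked_shots, relevant):
    """Non-interpolated average precision for one search topic.

    ranked_shots: system ranking of shot IDs, best first.
    relevant:     set of shot IDs judged relevant for the topic.
    """
    hits, precision_sum = 0, 0.0
    for i, shot in enumerate(ranked_shots, start=1):
        if shot in relevant:
            hits += 1
            precision_sum += hits / i  # precision at this recall point
    return precision_sum / len(relevant) if relevant else 0.0

# Example: 2 of 3 relevant shots retrieved, at ranks 1 and 3.
print(average_precision(["s1", "s4", "s2"], {"s1", "s2", "s9"}))
# -> (1/1 + 2/3) / 3 ≈ 0.556
```

Averaging this value over the 25 topics gives the mean average precision typically quoted for TRECVID search runs.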