The Fischlar-TRECVid-2004 system was developed for Dublin City University's participation in the 2004 TRECVid video information retrieval benchmarking activity. The system supports search and retrieval of video shots from over 60 hours of content. The shot retrieval engine combines query text matched against spoken dialogue with image-to-image matching, in which a still image (sourced externally) or a keyframe (from within the video archive itself) is matched against all keyframes in the archive. Three separate text retrieval engines are employed, for closed-caption text, automatic speech recognition transcripts, and video OCR. Visual shot matching is based primarily on MPEG-7 low-level descriptors. The system supports relevance feedback at the shot level, enabling query augmentation and refinement using relevant shots located by the user. Two variants of the system were developed: one supporting both text- and image-based searching, and one supporting image-only searching. A user evaluation experiment compared the two systems. Results show that while the variant combining text- and image-based searching achieves greater retrieval effectiveness, users pose more varied and extensive queries with the image-only version.