Evaluation campaigns and TRECVid
The TREC Video Retrieval Evaluation (TRECVid) is an international benchmarking activity that encourages research in video information retrieval by providing a large test collection, uniform scoring procedures, and a forum for organizations interested in comparing their results. TRECVid completed its fifth annual cycle at the end of 2005, and in 2006 it will involve almost 70 research organizations, universities and other consortia. Throughout its existence, TRECVid has benchmarked both interactive and automatic/manual searching for shots within a video corpus, automatic detection of a variety of semantic and low-level video features, shot boundary detection, and the detection of story boundaries in broadcast TV news. This paper gives an introduction to information retrieval (IR) evaluation from both a user and a system perspective, highlighting that system evaluation is by far the most prevalent type of evaluation carried out. We also include a summary of TRECVid as an example of a system evaluation benchmarking campaign, which allows us to discuss whether such campaigns are a good thing or a bad thing. There are arguments for and against these campaigns; we present some of them in the paper, concluding that on balance they have had a very positive impact on research progress.
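The "uniform scoring procedures" referred to above are, for the search tasks, based on average precision over ranked lists of retrieved shots. A minimal sketch of that measure is below; the shot identifiers and relevance judgements are illustrative, not actual TRECVid data.

```python
# Sketch of non-interpolated average precision, the standard TREC-style
# scoring measure for ranked retrieval runs. Shot IDs below are illustrative.

def average_precision(ranked_ids, relevant):
    """Average of the precision values at each rank where a relevant shot appears,
    normalized by the total number of relevant shots."""
    hits = 0
    precision_sum = 0.0
    for rank, shot_id in enumerate(ranked_ids, start=1):
        if shot_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

# Illustrative run: the two relevant shots appear at ranks 1 and 3,
# so AP = (1/1 + 2/3) / 2 ≈ 0.833.
ranked = ["shot12_4", "shot9_1", "shot12_7", "shot3_2"]
relevant = {"shot12_4", "shot12_7"}
print(round(average_precision(ranked, relevant), 3))  # → 0.833
```

Averaging this score over all search topics gives mean average precision (MAP), the summary figure typically reported for TREC-style search runs.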
TRECVID: evaluating the effectiveness of information retrieval tasks on digital video
TRECVID is an annual exercise that encourages research in information retrieval from digital video by providing a large video test collection, uniform scoring procedures, and a forum for organizations interested in comparing their results. TRECVID benchmarking covers both interactive and manual searching by end users, as well as some supporting technologies, including shot boundary detection, extraction of semantic features, and the automatic segmentation of TV news broadcasts into non-overlapping news stories. TRECVID draws over 40 participating groups from across the world, and as it is now (2004) in its fourth annual cycle it is opportune to stand back and look at the lessons learned from the cumulative activity. In this paper we present a brief, high-level overview of the TRECVID activity covering the data, the benchmarked tasks, the overall results obtained by groups to date, and the approaches taken by selected groups in some tasks. While progress from one year to the next cannot be measured directly because of the changing nature of the video data we have been using, we present a summary of the lessons we have learned from TRECVID and point out which of these lessons we feel are the most important.
TRECVID: Benchmarking the Effectiveness of Information Retrieval Tasks on Digital Video
Many research groups worldwide are now investigating techniques to support information retrieval on archives of digital video, and as these groups implement their techniques they inevitably try to evaluate their performance in practical situations. The difficulty is that there is no test collection or environment in which the effectiveness of video IR, or of video IR sub-tasks, can be evaluated and compared. The annual series of TREC exercises has, for over a decade, benchmarked the effectiveness of systems carrying out various information retrieval tasks on text and audio, and has contributed to huge improvements in many of these. Two years ago, a track was introduced covering shot boundary detection, feature extraction and searching through archives of digital video. In this paper we present a summary of the activities in the TREC Video track in 2002, in which 17 teams from across the world took part.