Applying Semantic Association To Support Content-Based Video Retrieval

The traditional approach to video retrieval is to first annotate the video with textual information (titles and keywords) and then match queries against this keyword set. Since automatic annotation is not yet available, this annotation requires a great amount of manual labor and has proved impractical in applications. Another approach, at the other extreme, is to use low-level video content such as color, texture, shape, and motion features, in an attempt to eliminate the need for keyword annotation. In this paper, we hold the view that a user-preferable query form should include both keywords and video content. We explore the semantic aspect based on video TOC structuring [1]. Close captioning is used to extract a basic keyword set, and WordNet, an electronic lexical system, is used to provide semantic association. The approach has been applied in Web-MARS VIR, and experimental results show that retrieval performance is greatly improved.
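To illustrate the idea of semantic association in query matching, the sketch below expands query keywords with their semantic associates before scoring videos by keyword overlap. The synonym map, the `retrieve` function, and the toy caption index are all hypothetical stand-ins for illustration; in the paper, WordNet synsets supply the associations and the keyword sets come from close captions.

```python
# Toy stand-in for WordNet: each keyword maps to its semantic associates.
# (Hypothetical mini-lexicon; the actual system queries WordNet.)
SYNONYMS = {
    "car": {"automobile", "vehicle"},
    "ocean": {"sea"},
    "game": {"match", "contest"},
}

def expand_query(keywords):
    """Expand each query keyword with its semantic associates."""
    expanded = set(keywords)
    for kw in keywords:
        expanded |= SYNONYMS.get(kw, set())
    return expanded

def retrieve(query_keywords, video_index):
    """Rank videos by overlap between the expanded query and each
    video's caption-derived keyword set; drop zero-overlap videos."""
    expanded = expand_query(query_keywords)
    scored = [(len(expanded & kws), vid) for vid, kws in video_index.items()]
    return [vid for score, vid in sorted(scored, reverse=True) if score > 0]

# Toy index: keyword sets extracted from close captions, one per clip.
index = {
    "clip1": {"automobile", "race"},
    "clip2": {"sea", "storm"},
    "clip3": {"cooking", "recipe"},
}

print(retrieve(["car"], index))    # finds clip1 via the associate "automobile"
print(retrieve(["ocean"], index))  # finds clip2 via the associate "sea"
```

Without expansion, the query "car" would miss clip1 entirely, since its caption keywords contain only "automobile"; semantic association is what bridges that vocabulary gap.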