Video event understanding through building knowledge domain

This paper puts forward a novel method for video event analysis and description based on a knowledge domain. Semantic concepts in the context of the video event are described within one specific domain, enriched with qualitative attributes of the semantic objects, multimedia processing approaches, and domain-independent factors: low-level features (pixel color, motion vectors, and spatio-temporal relationships). In this work, we consider one shot (episode) of a billiard-game video as the specific domain to explain the process of video event detection. In addition, another main contribution is exploiting a video object ontology to map MPEG-7 high-level descriptors to low-level feature descriptors defined in the MPEG-7 logical structure.
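As a rough illustration of the descriptor-to-concept mapping sketched above, the following Python snippet shows how low-level features of a tracked region (color, motion magnitude, a spatio-temporal relation) might be mapped to semantic concepts of a billiard-game domain ontology. The class name, concept names, and thresholds (`LowLevelDescriptors`, `BILLIARD_ONTOLOGY`, the 2.0 pixels/frame cutoff) are hypothetical and not taken from the paper; this is a minimal sketch of the general idea, not the authors' implementation.

```python
# Minimal sketch (hypothetical names and thresholds, not the paper's method):
# low-level MPEG-7-style descriptors of a tracked region are mapped to
# domain concepts of a billiard-game ontology via simple rule predicates.

from dataclasses import dataclass

@dataclass
class LowLevelDescriptors:
    dominant_color: tuple      # mean RGB of the tracked region
    motion_magnitude: float    # average motion-vector length (pixels/frame)
    near_pocket: bool          # spatio-temporal relation to a pocket region

# Domain ontology fragment: concept name -> predicate over low-level descriptors
BILLIARD_ONTOLOGY = {
    "cue_ball":    lambda d: d.dominant_color == (255, 255, 255),
    "moving_ball": lambda d: d.motion_magnitude > 2.0,
    "pot_event":   lambda d: d.motion_magnitude > 2.0 and d.near_pocket,
}

def annotate(descriptors: LowLevelDescriptors) -> list:
    """Return the domain concepts whose predicates the descriptors satisfy."""
    return [name for name, pred in BILLIARD_ONTOLOGY.items() if pred(descriptors)]

if __name__ == "__main__":
    region = LowLevelDescriptors(dominant_color=(255, 255, 255),
                                 motion_magnitude=4.5,
                                 near_pocket=True)
    print(annotate(region))   # ['cue_ball', 'moving_ball', 'pot_event']
```

In practice such predicates would be induced from the domain knowledge base rather than hand-coded, but the structure, low-level evidence checked against concept definitions in the ontology, is the same.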
