Automatic Video Annotation Framework Using Concept Detectors

Automatic video annotation has received a great deal of attention from researchers working on video retrieval. This study presents a novel automatic video annotation framework that enhances annotation accuracy and reduces processing time on large-scale video data by utilizing semantic concepts. The proposed framework consists of three main modules, i.e., pre-processing, video analysis, and annotation. The framework supports efficient search and retrieval for video content analysis and video archive applications. Experimental results on the widely used TRECVID dataset, using the Columbia374 concepts, demonstrate the effectiveness of the proposed framework in assigning appropriate and semantically representative annotations to new videos.
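
As a rough illustration of the three-module structure described in the abstract, the sketch below shows one possible way such a pipeline could be organized in Python. The function names, the placeholder keyframe identifiers, the dummy concept scores, and the score-threshold rule are all illustrative assumptions, not the authors' actual implementation or the Columbia374 detector interface.

    # Minimal sketch of a pre-processing -> video analysis -> annotation pipeline.
    # All names and values are hypothetical; real systems would plug in shot
    # detection, keyframe extraction, and trained concept detectors here.

    from typing import Dict, List


    def preprocess(video_path: str) -> List[str]:
        # Pre-processing module: split the video into shots and select keyframes.
        # Placeholder: return a single hypothetical keyframe identifier.
        return [f"{video_path}#shot0/keyframe0"]


    def analyze(keyframes: List[str]) -> Dict[str, float]:
        # Video analysis module: score each semantic concept on the keyframes.
        # A real system would run a bank of concept detectors (e.g., the
        # Columbia374 set); here we return dummy scores for two concepts.
        return {"outdoor": 0.82, "crowd": 0.35}


    def annotate(concept_scores: Dict[str, float], threshold: float = 0.5) -> List[str]:
        # Annotation module: keep the concepts whose detector score passes
        # an assumed threshold and use them as the video's annotations.
        return [concept for concept, score in concept_scores.items() if score >= threshold]


    if __name__ == "__main__":
        keyframes = preprocess("example_video.mp4")  # hypothetical input
        scores = analyze(keyframes)
        labels = annotate(scores)
        print("Assigned annotations:", labels)       # e.g., ['outdoor']

Under these assumptions, the annotations produced by the last stage are what a retrieval or archive application would index to answer concept-based queries.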
