Modeling the Dance Video Annotations

This paper presents a dance video content model (DVCM) to represent the semantics of dance videos at multiple levels of granularity. The DVCM is built on concepts such as video, shot, segment, event, and object, which are components of the MPEG-7 MDS. The paper introduces a new relationship type, called the Temporal Semantic Relationship, to infer semantic relationships between dance video objects. An inverted-file-based index is created to reduce the search time of dance video queries, and an interactive query processor is implemented using J2SE 1.5 and JMF 2.0 to perform the semantic search.
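To illustrate the inverted-file indexing mentioned above, the following is a minimal sketch in Java, the language family of the paper's J2SE-based query processor. All names here (InvertedIndex, segment identifiers, annotation terms) are hypothetical illustrations, not taken from the DVCM system, and the code targets a modern JDK rather than the original J2SE 1.5. The idea is simply that each annotation term maps to a posting list of segment identifiers, so a query is answered by lookups and intersections instead of a linear scan over all annotations.

import java.util.*;

// Minimal sketch of an inverted-file index over dance video annotations.
// Each annotation term maps to the set of video segment IDs it occurs in.
// Names and structure are illustrative only, not the paper's implementation.
public class InvertedIndex {
    // term -> posting list of segment identifiers
    private final Map<String, Set<String>> postings = new HashMap<>();

    // Index one annotated segment under each of its annotation terms.
    public void addSegment(String segmentId, Collection<String> terms) {
        for (String term : terms) {
            postings.computeIfAbsent(term.toLowerCase(), k -> new TreeSet<>())
                    .add(segmentId);
        }
    }

    // Conjunctive query: return segments annotated with all query terms.
    public Set<String> query(Collection<String> terms) {
        Set<String> result = null;
        for (String term : terms) {
            Set<String> hits =
                postings.getOrDefault(term.toLowerCase(), Collections.emptySet());
            if (result == null) {
                result = new TreeSet<>(hits);
            } else {
                result.retainAll(hits); // intersect posting lists
            }
        }
        return result == null ? Collections.emptySet() : result;
    }

    public static void main(String[] args) {
        InvertedIndex index = new InvertedIndex();
        index.addSegment("video1/shot3", Arrays.asList("spin", "leap"));
        index.addSegment("video2/shot1", Arrays.asList("spin", "bow"));
        // Prints both segments annotated with "spin":
        System.out.println(index.query(Arrays.asList("spin")));
    }
}

In this sketch, search time for a query term is a single hash lookup plus a set intersection over the matching posting lists, which is the efficiency argument the abstract makes for using an inverted file rather than scanning every annotation.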
