Video content analysis is an active research domain owing to the growing availability of audiovisual data in digital format. There is a need to automatically extract video content for efficient access, understanding, browsing, and retrieval. To obtain information of interest and to provide better entertainment, tools are needed that help users extract relevant content and navigate effectively through the large amount of available video information. Existing methods make little attempt to model and estimate the semantic content of video. Detecting and interpreting human presence, actions, and activities is one of the most valuable functions of the proposed framework. The general objective of this research is to analyze and process audio-video streams within a robust audiovisual action recognition system by integrating, structuring, and accessing multimodal information via a multidimensional retrieval and extraction model. The proposed technique characterizes action scenes by integrating cues obtained from both the audio and video tracks: visual features (motion, edges, and the visual characteristics of objects) are combined with audio features to recognize action. The model uses hidden Markov models (HMMs) and Gaussian mixture models (GMMs) to fuse these features and to represent the multidimensional structure of the framework. The action-related visual cues are obtained by computing the spatio-temporal dynamic activity of video shots and by abstracting specific visual events. In parallel, the audio track is analyzed by locating and computing several sound effects of action events embedded in the video. Finally, these audio and visual cues are combined to identify action scenes. Compared with using either the visual or the audio track alone, the combined audiovisual information provides more reliable performance and allows the story content of movies to be understood in more detail. To evaluate the usefulness of the proposed framework, several experiments were conducted: visual features alone achieved 77.89% precision and 72.10% recall, audio features alone achieved 62.52% precision and 48.93% recall, and the combined audiovisual features achieved 90.35% precision and 90.65% recall.
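To make the fusion step concrete, the sketch below shows one way an HMM/GMM late-fusion classifier of this general kind could be assembled: per-class Gaussian HMMs score visual feature sequences, per-class GMMs score audio feature frames, and the two log-likelihoods are combined with a weight. This is a minimal illustration under assumed inputs (precomputed visual and audio feature arrays, a hypothetical binary action/non-action label set, and the hmmlearn and scikit-learn libraries), not the paper's exact model.

```python
# Minimal late-fusion sketch (illustrative only, not the authors' model):
# per-class Gaussian HMMs model visual feature sequences, per-class GMMs
# model audio feature frames, and their log-likelihoods are combined.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.mixture import GaussianMixture

CLASSES = ["action", "non_action"]  # hypothetical label set

def train_models(visual_seqs, audio_frames):
    """visual_seqs: {label: list of (T_i, D_v) arrays of visual features};
    audio_frames: {label: (N, D_a) array of audio feature vectors}."""
    hmms, gmms = {}, {}
    for label in CLASSES:
        X = np.vstack(visual_seqs[label])
        lengths = [len(s) for s in visual_seqs[label]]
        hmms[label] = GaussianHMM(n_components=4).fit(X, lengths)
        gmms[label] = GaussianMixture(n_components=8).fit(audio_frames[label])
    return hmms, gmms

def classify_shot(visual_seq, audio_feats, hmms, gmms, w=0.6):
    """Weighted sum of per-frame visual and audio log-likelihoods."""
    scores = {}
    for label in CLASSES:
        lv = hmms[label].score(visual_seq) / len(visual_seq)  # per-frame
        la = gmms[label].score(audio_feats)  # mean log-likelihood per frame
        scores[label] = w * lv + (1.0 - w) * la
    return max(scores, key=scores.get)
```

The weight w governs how much each modality contributes; the reported jump from single-modality precision/recall to the combined 90.35%/90.65% suggests that, in practice, neither stream should dominate entirely.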