Multi-modal extraction of highlights from TV Formula 1 programs

As the amount of publicly available video data grows, the need to automatically infer semantics from raw video data becomes significant. In this paper, we focus on the use of dynamic Bayesian networks (DBN) for that purpose, and demonstrate how they can be effectively applied for fusing the evidence obtained from different media information sources. The approach is validated in the particular domain of Formula 1 race videos. For that specific domain we introduce a robust audiovisual feature extraction scheme and a text detection and recognition method. Based on numerous experiments performed with DBNs, we give some recommendations with respect to the modeling of temporal and atemporal dependencies within the network. Finally, we present the experimental results for the detection of excited speech and the extraction of highlights, as well as the advantageous query capabilities of our system.
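
To make the fusion idea concrete, the following is a minimal sketch of how a DBN can combine per-frame evidence from several modalities. It models the hidden state (ordinary footage vs. highlight) as a two-state Markov chain with three conditionally independent discretized observations per time slice (audio excitement, visual motion, on-screen text cue), and runs forward filtering to obtain the posterior probability of a highlight at each frame. This is an illustrative simplification, not the network from the paper: the structure, the `fuse_evidence` function, and all probability values are assumptions made for the example.

```python
import numpy as np

# Minimal two-slice DBN sketch for multi-modal highlight detection.
# Hidden state H_t in {0: ordinary, 1: highlight}; each slice emits three
# conditionally independent binary observations. All probabilities are
# illustrative placeholders, not values from the paper.

# P(H_t | H_{t-1}): rows = previous state, cols = current state.
transition = np.array([[0.95, 0.05],
                       [0.20, 0.80]])

# P(obs | H_t) per modality: rows = hidden state, cols = observed value.
emit_audio = np.array([[0.8, 0.2],   # excited speech is rare outside highlights
                       [0.3, 0.7]])
emit_video = np.array([[0.7, 0.3],   # high-motion shots
                       [0.4, 0.6]])
emit_text = np.array([[0.9, 0.1],    # e.g. a "FASTEST LAP" overlay detected
                      [0.5, 0.5]])

prior = np.array([0.9, 0.1])         # P(H_0)

def fuse_evidence(obs_seq):
    """Forward filtering: returns P(H_t = highlight | obs_1..t) per frame.

    obs_seq is a list of (audio, video, text) binary observations.
    """
    belief = prior.copy()
    posteriors = []
    for a, v, t in obs_seq:
        # Predict step: propagate the belief through the transition model.
        belief = belief @ transition
        # Update step: conditional independence given H_t lets us multiply
        # in the likelihood of each modality separately.
        belief = belief * emit_audio[:, a] * emit_video[:, v] * emit_text[:, t]
        belief /= belief.sum()
        posteriors.append(belief[1])
    return posteriors

# Example: a quiet segment followed by excited commentary, motion, and an
# on-screen overlay; the highlight posterior rises as evidence accumulates.
obs = [(0, 0, 0), (0, 1, 0), (1, 1, 1), (1, 1, 1), (0, 0, 0)]
print([round(p, 3) for p in fuse_evidence(obs)])
```

The design point the sketch illustrates is that no single modality needs to be decisive: each observation merely reweights the belief, and the temporal model smooths over momentary misdetections in any one stream.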
