IBM multimodal interactive video threading
In this demo we present a novel approach for (a) automatic labeling and grouping of multimedia content using existing metadata and semantic concepts, and (b) interactive, context-driven tagging of clusters of multimedia content. The proposed system leverages existing metadata in conjunction with automatically assigned semantic descriptors. One of the challenges facing multimedia retrieval systems today is organizing and presenting video data in a way that lets the user navigate the rich index space most efficiently. The information needs of users typically span a range of semantic concepts, associated metadata, and content similarity. We propose to jointly analyze and navigate the metadata, semantic, and visual spaces in order to identify new relationships among content items and to allow the user to link the aggregated content to a complex event description. The advantages of the proposed system are realized in an increased ability to target content delivery to users, such as in collaborative or multi-domain user environments.
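As a rough illustration of the joint metadata/semantic analysis described above, the following Python sketch (not from the paper; the data, variable names, feature fusion, and the k-means clustering choice are all assumptions) combines text metadata with semantic-concept detector scores, groups the videos into clusters, and propagates a single user-supplied tag to every member of each cluster.

```python
# Minimal sketch (not the authors' implementation): fuse per-video metadata
# and semantic-concept scores into one feature vector, cluster the videos,
# and let a user tag whole clusters at once. All names/data are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

videos = [
    {"id": "v1", "metadata": "parade downtown crowd", "concepts": [0.9, 0.1, 0.7]},
    {"id": "v2", "metadata": "street parade marching band", "concepts": [0.8, 0.2, 0.6]},
    {"id": "v3", "metadata": "kitchen cooking recipe", "concepts": [0.1, 0.9, 0.0]},
    {"id": "v4", "metadata": "chef cooking demo", "concepts": [0.2, 0.8, 0.1]},
]

# Text metadata -> TF-IDF vectors; semantic concept detectors -> score vectors.
meta_vecs = TfidfVectorizer().fit_transform([v["metadata"] for v in videos]).toarray()
concept_vecs = np.array([v["concepts"] for v in videos], dtype=float)

# Joint space: normalize each modality and concatenate (simple early fusion).
joint = np.hstack([normalize(meta_vecs), normalize(concept_vecs)])

# Group related content; the number of clusters is a free parameter here.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(joint)

clusters = {}
for video, label in zip(videos, labels):
    clusters.setdefault(label, []).append(video["id"])

# Interactive, context-driven tagging: one user-supplied tag per cluster is
# propagated to every member, linking aggregated content to an event label.
user_tags = {0: "city-parade-event", 1: "cooking-show"}  # hypothetical user input
for label, members in clusters.items():
    print(user_tags.get(label, "untagged"), "->", members)
```

In this toy setup, tagging a cluster once labels all of its member videos, which is the kind of aggregated, context-driven annotation the demo targets; a real system would replace the hand-built vectors with detector outputs and richer metadata.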