Mining Relationships Between Video Concepts Using Probabilistic Graphical Models

For large-scale automatic semantic video characterization, it is necessary to learn and model a large number of semantic concepts. These concepts do not exist in isolation from one another, and exploiting the relationships between multiple video concepts can be a useful way to improve concept detection accuracy. In this paper, we describe various multi-concept relational learning approaches via a unified probabilistic graphical model representation and propose several graphical models, not previously applied to this task, for mining the relationships between video concepts. Their performance in video semantic concept detection is evaluated and compared on two TRECVID'05 video collections.
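To make the idea of exploiting inter-concept relationships concrete, the following is a minimal illustrative sketch (not the models studied in this paper): it refines independent per-concept detector scores with mean-field inference on a pairwise Markov random field whose couplings could be estimated from concept co-occurrence statistics. All function names, parameters, and numbers are hypothetical.

```python
# Hypothetical sketch: mean-field inference on a pairwise MRF over binary
# concept labels. Per-concept detector log-odds act as unary potentials;
# a co-occurrence-derived coupling matrix W acts as pairwise potentials.
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def refine_concept_scores(detector_logits, W, n_iters=20):
    """Refine independent concept probabilities using pairwise couplings.

    detector_logits : (K,) per-concept log-odds from independent detectors
    W               : (K, K) symmetric couplings (positive = concepts co-occur)
    Returns (K,) refined marginal probabilities q_i = q(concept_i = 1).
    """
    q = sigmoid(detector_logits)              # start from the detector outputs
    for _ in range(n_iters):
        # Mean-field update: each concept's belief absorbs its neighbors' beliefs
        q = sigmoid(detector_logits + W @ q)
    return q


if __name__ == "__main__":
    # Toy example with three concepts, e.g. "sky", "outdoor", "studio"
    logits = np.array([1.2, 0.1, -0.5])
    W = np.array([[0.0,  1.5, -1.0],          # "sky" and "outdoor" co-occur;
                  [1.5,  0.0, -1.2],          # both are negatively coupled
                  [-1.0, -1.2, 0.0]])         # with "studio"
    print(refine_concept_scores(logits, W))
```

In this toy run, the positive coupling between the first two concepts pulls up the weak "outdoor" score, while the negative couplings suppress "studio", illustrating how contextual relationships can correct individual detectors.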