Video annotation based on temporally consistent Gaussian random field
A novel method for automatically annotating video semantics, called temporally consistent Gaussian random field (TCGRF), is proposed. Since temporally adjacent video segments (e.g., shots) usually share a similar semantic concept, TCGRF incorporates the temporal consistency of video data into graph-based semi-supervised learning to improve the annotation results. Experiments conducted on the TRECVID data set demonstrate its effectiveness.
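The abstract builds on the Gaussian random field formulation of graph-based semi-supervised learning, where labels on unlabeled nodes are obtained from the harmonic solution f_u = (D_uu - W_uu)^{-1} W_ul f_l. A minimal sketch of that underlying idea follows; it is not the paper's TCGRF implementation, and the toy graph (a chain of five shots with one assumed feature-similarity edge) is hypothetical, added only to illustrate how temporal edges between adjacent shots propagate a concept label:

```python
import numpy as np

def grf_harmonic(W, labels, labeled_idx):
    """Harmonic-function solution of graph-based SSL (Zhu et al.):
    f_u = (D_uu - W_uu)^{-1} W_ul f_l, with W a symmetric affinity matrix."""
    n = W.shape[0]
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)
    D = np.diag(W.sum(axis=1))
    L = D - W                       # combinatorial graph Laplacian
    Luu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    Wul = W[np.ix_(unlabeled_idx, labeled_idx)]
    f = np.zeros(n)
    f[labeled_idx] = labels         # clamp known labels
    # unlabeled scores are the harmonic (weighted-average) solution
    f[unlabeled_idx] = np.linalg.solve(Luu, Wul @ f[labeled_idx])
    return f

# Hypothetical toy graph: 5 shots in temporal order.
# Edges between adjacent shots encode the temporal-consistency idea;
# the extra 0--2 edge stands in for a feature-similarity link (assumed).
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0   # temporal chain
W[0, 2] = W[2, 0] = 0.5               # assumed feature-based edge
f = grf_harmonic(W, labels=np.array([1.0, 0.0]), labeled_idx=np.array([0, 4]))
print(np.round(f, 3))
```

The label score decays smoothly along the temporal chain from the positive shot (node 0) to the negative one (node 4), which is the behavior the temporal-consistency assumption is meant to encourage.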