Lazy Learning Based Efficient Video Annotation

Eager learning methods, such as SVM, are widely applied to video annotation because of their strong performance. However, their computational cost becomes prohibitive on large datasets, especially when a large lexicon of semantic concepts must be annotated. This paper proposes a video annotation scheme based on lazy learning, built on a recently proposed improved Parzen window method, and shows that the scheme is much more computationally efficient and flexible. Once the pairwise relationships in the dataset have been built, annotation for each concept can be completed rapidly. Experiments show that the proposed method is much more efficient than SVM while achieving comparable performance.
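
The key to the efficiency claim is that the expensive step, computing pairwise relationships between samples, is done once and then reused for every concept in the lexicon. The paper's improved Parzen window method is not detailed in this abstract; the sketch below assumes a plain Gaussian Parzen window as an illustrative stand-in, and the function names (`pairwise_sq_distances`, `parzen_annotate`) and the `bandwidth` parameter are hypothetical, not taken from the paper.

```python
import numpy as np

def pairwise_sq_distances(train_feats, test_feats):
    """Precompute squared Euclidean distances between test and training samples.

    Computed once; every concept reuses the same (n_test, n_train) matrix.
    """
    # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2
    tr = np.sum(train_feats ** 2, axis=1)
    te = np.sum(test_feats ** 2, axis=1)
    return te[:, None] - 2.0 * test_feats @ train_feats.T + tr[None, :]

def parzen_annotate(sq_dists, train_labels, bandwidth=1.0):
    """Score test samples for one concept with a Gaussian Parzen window.

    sq_dists     : (n_test, n_train) precomputed squared distances
    train_labels : (n_train,) binary labels {0, 1} for the current concept
    Returns relevance scores in [0, 1]; thresholding yields the annotation.
    """
    weights = np.exp(-sq_dists / (2.0 * bandwidth ** 2))  # kernel weights
    pos = weights @ train_labels                          # mass from positive neighbors
    total = weights.sum(axis=1) + 1e-12                   # mass from all neighbors
    return pos / total

# Toy usage: annotate several concepts while sharing one distance matrix.
rng = np.random.default_rng(0)
train_x, test_x = rng.normal(size=(200, 64)), rng.normal(size=(50, 64))
dists = pairwise_sq_distances(train_x, test_x)            # built once
for concept in range(3):
    labels = rng.integers(0, 2, size=200).astype(float)   # per-concept labels (synthetic)
    scores = parzen_annotate(dists, labels, bandwidth=2.0)
    annotations = scores > 0.5
```

Under this reading, adding a new concept requires no retraining at all, only another weighted vote over the precomputed distances, whereas an eager learner such as SVM would have to train a separate model per concept.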