Attention-driven action retrieval with DTW-based 3D descriptor matching

From a visual-perception viewpoint, actions in videos capture high-level semantics for video content understanding and retrieval. Action-level video retrieval, however, faces great challenges: interference from global motion and concurrent actions, and the difficulty of describing and matching actions robustly. This paper presents a content-based action retrieval framework that enables effective search for near-duplicate actions in a large-scale video database. First, we present an attention-shift model to distill salient, human-attended actions and separate them from global motion and concurrent actions. Second, to characterize each salient action, we extract a 3D-SIFT descriptor within its spatio-temporal region, which is robust to rotation, scale, and viewpoint variation. Finally, action similarity is measured with the Dynamic Time Warping (DTW) distance, which tolerates variation in action duration and partially missing motion. Search efficiency on large-scale datasets is achieved through hierarchical descriptor indexing and approximate nearest-neighbor search. For validation, we present a prototype system, VILAR, that supports action search within the "Friends" TV series with excellent accuracy, efficiency, and the ability to reveal human perception.
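The similarity measure above can be illustrated with the classic DTW recurrence. The sketch below is a minimal, generic implementation (not the authors' exact formulation): it aligns two sequences of per-frame feature vectors, such as 3D-SIFT descriptors, and returns the accumulated alignment cost. Because the warping path may stretch or compress either sequence, the measure tolerates differing action durations.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two descriptor sequences.

    a, b: sequences of feature vectors (e.g. per-frame descriptors).
    The sequences may have different lengths; the warping path absorbs
    duration variance, which is the property the retrieval framework
    relies on. Illustrative sketch only, with Euclidean frame cost.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j],      # stretch sequence b
                                 D[i, j - 1],      # stretch sequence a
                                 D[i - 1, j - 1])  # one-to-one match
    return D[n, m]

# A repeated frame (duration change) costs nothing under DTW:
# dtw_distance([[0.], [1.], [2.]], [[0.], [1.], [1.], [2.]]) == 0.0
```

In practice the per-frame cost would be the distance between quantized 3D-SIFT descriptors, and a Sakoe-Chiba band or similar constraint is commonly added to keep the alignment near the diagonal and the cost quadratic-in-band rather than fully quadratic.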
