Keyframe-Based Video Summary Using Visual Attention Clues

This paper proposes a visual attention index descriptor, derived from a visual attention model, to bridge the semantic gap between the low-level descriptors computers extract and the high-level concepts humans perceive, and uses it to select keyframes for video summarization.
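The paper's own attention model is not reproduced here, but the general idea of scoring frames by a bottom-up attention index and keeping the highest-scoring ones as keyframes can be sketched as follows. The center-surround contrast measure, the box-blur surround of width `k`, and the function names are illustrative assumptions, not the authors' actual descriptor:

```python
import numpy as np

def attention_index(frame, k=9):
    """Crude bottom-up attention index: mean center-surround contrast.
    The 'surround' is approximated by a k-by-k box blur (hypothetical choice)."""
    pad = k // 2
    padded = np.pad(frame.astype(float), pad, mode="edge")
    surround = np.zeros_like(frame, dtype=float)
    # Accumulate the k*k neighborhood of every pixel, then average it.
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            surround += padded[pad + dy:pad + dy + frame.shape[0],
                               pad + dx:pad + dx + frame.shape[1]]
    surround /= k * k
    # A frame is "attended" to the extent its pixels differ from their surround.
    return np.abs(frame - surround).mean()

def select_keyframes(frames, n=2):
    """Rank frames by attention index and keep the n highest-scoring ones,
    returned in temporal order."""
    scores = [attention_index(f) for f in frames]
    order = np.argsort(scores)[::-1][:n]
    return sorted(order.tolist())
```

A uniform frame scores zero (no contrast anywhere), while a frame containing a high-contrast region scores higher and is therefore preferred as a keyframe; the real method additionally exploits motion and temporal attention cues.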
