In this paper, we propose a dynamic attentive system for detecting the most salient regions of interest in omnidirectional video. The spot selection is based on computer modeling of dynamic visual attention. To operate on video sequences, the process encompasses multiscale contrast detection of static and motion information, as well as the fusion of this information into a scalar map called the saliency map. The processing is performed in spherical geometry. While the static contribution, collected in the static saliency map, relies on our previous work, we propose a novel motion model based on a block matching algorithm computed on the sphere. A spherical motion field pyramid is first estimated from two consecutive omnidirectional images by varying the block size; this pyramid constitutes the input of the motion model. The motion saliency map is then obtained by applying a multiscale motion contrast detection method that highlights the most salient motion regions. Finally, the static and motion saliency maps are integrated into a spherical dynamic saliency map. To illustrate the concept, the proposed attentive system is applied to real omnidirectional video sequences.
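The sketch below is only a minimal, planar illustration of the pipeline summarized above: block-matching motion estimation, multiscale contrast detection, and fusion of static and motion maps. It does not implement the paper's spherical geometry or its integration scheme; the function names, block and search sizes, contrast scales, equal-weight fusion, and use of `scipy.ndimage` are assumptions made purely for illustration.

```python
# Illustrative sketch only: planar images stand in for spherical frames,
# and the parameters below are assumptions, not the authors' settings.
import numpy as np
from scipy.ndimage import uniform_filter, zoom


def block_matching(prev, curr, block=8, search=4):
    """Exhaustive block matching; returns a block-level motion magnitude map."""
    h, w = prev.shape
    mag = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = curr[y0:y0 + block, x0:x0 + block]
            best, best_dy, best_dx = np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue
                    sad = np.abs(ref - prev[y1:y1 + block, x1:x1 + block]).sum()
                    if sad < best:
                        best, best_dy, best_dx = sad, dy, dx
            mag[by, bx] = np.hypot(best_dy, best_dx)
    return mag


def multiscale_contrast(feature, scales=(3, 7, 15)):
    """Center-surround contrast: difference to the local mean at several
    surround sizes, accumulated over scales and normalized to [0, 1]."""
    contrast = np.zeros_like(feature, dtype=float)
    for s in scales:
        contrast += np.abs(feature - uniform_filter(feature, size=s))
    m = contrast.max()
    return contrast / m if m > 0 else contrast


def dynamic_saliency(prev, curr, static_saliency, w_static=0.5, w_motion=0.5):
    """Fuse static and motion conspicuity into one dynamic saliency map
    (equal weighting is an assumption, not the paper's integration scheme)."""
    motion_sal = multiscale_contrast(block_matching(prev, curr))
    # Bring the block-level motion map back to image resolution.
    motion_sal = zoom(motion_sal, (prev.shape[0] / motion_sal.shape[0],
                                   prev.shape[1] / motion_sal.shape[1]), order=1)
    return w_static * static_saliency + w_motion * motion_sal


# Toy usage on synthetic frames; a real system would use omnidirectional video
# and the static saliency model from the authors' earlier work.
rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
f1 = np.roll(f0, 2, axis=1) + 0.01 * rng.random((64, 64))  # simple horizontal shift
static_map = multiscale_contrast(f1)                        # stand-in static saliency
print(dynamic_saliency(f0, f1, static_map).shape)
```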