Correct interpretation of the events occurring in a video plays a key role in improving video surveillance systems and achieving the desired automatic decision making. Accurate analysis of the context of the scenes in a video contributes to its semantic understanding. In this paper, we present our research on context analysis within video sequences, focusing on fast automatic detection of sky and road. Regarding road detection, the goal of the present study is to develop a motion-based context analysis that annotates roads and restricts the computationally heavy search for moving objects to the areas where motion is detected. Our sky detection approach is adopted from Zafarifar et al. [1]. To evaluate the results, the average Coverability Rate (CR) is used. The road detection algorithm yields a CR of 0.97 on a single highway video sequence. Regarding sky detection, we show that our algorithm performs well compared with [2], achieving a CR of 0.98.
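The abstract does not define the Coverability Rate. A minimal sketch, assuming the common coverage-style definition CR = |detection ∩ ground truth| / |ground truth| computed over binary pixel masks (the function name and the toy masks are illustrative, not from the paper):

```python
def coverability_rate(detected, ground_truth):
    """Fraction of ground-truth pixels covered by the detection mask.

    Both arguments are 2-D lists of 0/1 values of the same shape.
    Assumed definition: |detected AND ground_truth| / |ground_truth|.
    """
    covered = 0
    total = 0
    for det_row, gt_row in zip(detected, ground_truth):
        for det, gt in zip(det_row, gt_row):
            if gt:
                total += 1
                if det:
                    covered += 1
    return covered / total if total else 0.0

# Toy example: 4 ground-truth pixels, 3 of them detected.
gt  = [[1, 1, 0],
       [1, 1, 0]]
det = [[1, 1, 0],
       [0, 1, 1]]
print(coverability_rate(det, gt))  # 0.75
```

The reported averages (0.97 for road, 0.98 for sky) would then be this ratio averaged over all evaluated frames.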
[1] Bahman Zafarifar. Adaptive modeling of sky for video processing and coding applications, 2006.
[2] Lutz Priese, et al. Sky Detection in CSC-segmented Color Images, VISAPP, 2009.
[3] Svitlana Zinger, et al. Context analysis: sky, water and motion, 2011.
[4] Fang Li, et al. Hierarchical Identification of Palmprint using Line-based Hough Transform, 18th International Conference on Pattern Recognition (ICPR'06), 2006.
[5] Jianping Fan, et al. Multi-level annotation of natural scenes using dominant image components and semantic concepts, MULTIMEDIA '04, 2004.