Fast sky and road detection for video context analysis

Correct interpretation of the events occurring in a video plays a key role in improving video surveillance systems and enabling the desired automatic decision making. Accurate analysis of scene context can contribute to the semantic understanding of the video. In this paper, we present our research on context analysis in video sequences, focusing on fast automatic detection of sky and road regions. For road detection, the goal of the present study is to develop a motion-based context analysis that annotates roads and restricts the computationally expensive search for moving objects to the areas where motion is detected. Our sky detection approach is adapted from Zafarifar et al. [1]. The results are evaluated using the average Coverability Rate (CR). The road detection algorithm yields a CR of 0.97 on a single highway video sequence. For sky detection, we show that our algorithm performs well compared with [2], achieving a CR of 0.98.
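
The abstract reports detection quality as an average Coverability Rate (CR). As a rough illustration only, the sketch below assumes CR for a frame is the fraction of ground-truth pixels covered by the detected mask, averaged over all frames of a sequence; the exact formula used in the paper may differ, and the function and mask names are hypothetical.

    import numpy as np

    def coverability_rate(detected: np.ndarray, ground_truth: np.ndarray) -> float:
        """Fraction of ground-truth pixels covered by the detected mask
        (assumed definition of CR; the paper's exact formula may differ)."""
        detected = detected.astype(bool)
        ground_truth = ground_truth.astype(bool)
        gt_pixels = ground_truth.sum()
        if gt_pixels == 0:
            return 1.0  # no region to cover in this frame
        return float(np.logical_and(detected, ground_truth).sum() / gt_pixels)

    def average_cr(detected_masks, ground_truth_masks) -> float:
        """Average CR over a sequence of per-frame binary masks."""
        rates = [coverability_rate(d, g)
                 for d, g in zip(detected_masks, ground_truth_masks)]
        return float(np.mean(rates))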