Exploring Viewable Angle Information in Georeferenced Video Search

As positioning data and other sensor information, such as orientation measurements, have become powerful contextual features generated by mobile devices during video recording, models capturing the geographic field of view (FOV) have been developed for georeferenced video search. An FOV is most accurately represented by the geometric shape of a circular sector. However, previous work simply employed a rectilinear vector model to represent the coverage area of a video scene. In this study, we propose a novel circular sector model with beginning and ending vectors for FOV representation, which additionally explores viewable angle information. Its major advantage is that it enables more accurate georeferenced video search, without the false positives or false negatives that occur in the previous single-vector model. We demonstrate how our model can be applied to perform different types of overlap queries for spatial data selection in a unified framework, while providing competitive performance in terms of efficiency.
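To make the circular-sector FOV representation concrete, the following is a minimal sketch (not from the paper) of an FOV defined by a camera position, view direction, viewable angle, and visible distance, together with its beginning/ending boundary vectors and a simple point-overlap test. All class and parameter names (`SectorFOV`, `direction`, `angle`, `radius`) are illustrative assumptions, not the authors' implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class SectorFOV:
    """Circular-sector field of view: camera position, central view direction,
    total viewable angle, and maximum visible distance (illustrative units)."""
    px: float          # camera x coordinate
    py: float          # camera y coordinate
    direction: float   # central view direction in degrees (from +x axis)
    angle: float       # total viewable angle in degrees
    radius: float      # maximum visible distance

    def begin_end_vectors(self):
        """Directions of the beginning and ending boundary vectors, in degrees."""
        begin = (self.direction - self.angle / 2.0) % 360.0
        end = (self.direction + self.angle / 2.0) % 360.0
        return begin, end

    def contains(self, qx: float, qy: float) -> bool:
        """Point-overlap query: the point is covered iff it lies within the
        visible distance and between the beginning and ending vectors."""
        dx, dy = qx - self.px, qy - self.py
        if math.hypot(dx, dy) > self.radius:
            return False
        bearing = math.degrees(math.atan2(dy, dx)) % 360.0
        begin, end = self.begin_end_vectors()
        if begin <= end:
            return begin <= bearing <= end
        return bearing >= begin or bearing <= end  # sector spans the 0-degree line

# Example: a 60-degree FOV facing along +x with 100-unit visible distance
fov = SectorFOV(px=0.0, py=0.0, direction=0.0, angle=60.0, radius=100.0)
print(fov.contains(50.0, 10.0))   # True: inside the sector
print(fov.contains(-50.0, 10.0))  # False: behind the camera
```

Under these assumptions, the angular check against the beginning and ending vectors is what a single-vector rectilinear model cannot express, which is the source of the false positives and false negatives mentioned above.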