Incorporating Geo-Tagged Mobile Videos into Context-Aware Augmented Reality Applications

In recent years, augmented reality (AR) has attracted extensive attention from both the research community and industry as a new form of media that mixes virtual content into the physical world. However, the scarcity of AR content and the lack of user context are major impediments to providing rich, dynamic multimedia content in AR applications. In this study, we propose an approach to search and filter big multimedia data, specifically geo-tagged mobile videos, for context-aware AR applications. The challenge is to automatically find interesting video segments within a huge collection of user-generated mobile videos, one of the largest classes of multimedia data, so that they can be efficiently incorporated into AR applications. We model the significance of video segments as AR content by adopting camera shooting patterns defined in filming, such as panning, zooming, tracking, and arcing. We then propose several efficient algorithms that search for such patterns using fine-grained geospatial properties of the videos, such as camera locations and viewing directions over time. Experiments on a real-world geo-tagged video dataset show that the proposed algorithms effectively search a large collection of user-generated mobile videos to identify the top-K most significant video segments.
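To make the pattern-search idea concrete, the sketch below detects one such shooting pattern, panning, from a per-frame sequence of camera positions and viewing directions. It is a minimal illustration, not the paper's algorithm: the function name, thresholds, and input layout are all assumptions. A pan is approximated here as a run of frames where the camera stays nearly still while its heading sweeps steadily in one direction.

```python
import math


def angle_diff(a, b):
    """Signed smallest difference between two headings in degrees, in (-180, 180]."""
    return (b - a + 180.0) % 360.0 - 180.0


def detect_panning(samples, min_len=5, min_rate=2.0, max_move=1.0):
    """Find candidate panning segments in a geo-tagged video.

    samples   -- list of (x, y, heading_deg) per frame; x, y are camera
                 coordinates in metres (e.g. GPS projected to a local plane).
    min_len   -- minimum number of frames for a segment to count as a pan.
    min_rate  -- minimum per-frame heading change (degrees) to count as sweeping.
    max_move  -- maximum per-frame camera displacement (metres) to count as
                 stationary.

    Returns a list of (start_index, end_index) pairs, inclusive.
    All thresholds are illustrative defaults, not values from the paper.
    """
    segments = []
    i, n = 0, len(samples)
    while i < n - 1:
        j = i
        direction = 0  # +1 = sweeping clockwise, -1 = counter-clockwise
        while j < n - 1:
            x0, y0, h0 = samples[j]
            x1, y1, h1 = samples[j + 1]
            moved = math.hypot(x1 - x0, y1 - y0)
            dh = angle_diff(h0, h1)
            # Stop extending if the camera moves too much or the sweep stalls.
            if moved > max_move or abs(dh) < min_rate:
                break
            step = 1 if dh > 0 else -1
            if direction == 0:
                direction = step
            elif step != direction:  # sweep reversed; end the segment
                break
            j += 1
        if j - i + 1 >= min_len:
            segments.append((i, j))
        i = max(j, i + 1)
    return segments
```

In a full system, each detected segment would then be scored for significance and the top-K segments returned; other patterns (zooming, tracking, arcing) would need analogous predicates over the same positional and directional metadata.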
