This paper presents interactive features we have added to our street-view web navigation application. The system lets users navigate through large volumes of data (panoramas and laser point clouds) and interact with them. We detail four aspects of this interactivity. First, labelling: displaying features directly in the images in 3D space, useful both for the general public and for researchers in image processing and computer vision. Second, a crowdsourcing mode for blurring people who were not detected automatically. Third, the possibility for the web user to localize and measure in 3D any object visible in the images by clicking in a single image. Finally, a multimedia editor that allows public administrations (town halls, museums, operas, theaters, etc.) to add interactive content such as video or images at the exact 3D position, orientation, and size they choose, augmenting the static scenes realistically with dynamic or more recent elements.
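To illustrate the single-image 3D measurement idea, here is a minimal sketch of one plausible approach: cast the viewing ray of the clicked panorama pixel and take the nearest laser point to that ray as the picked 3D position; two such picks give a metric distance. The names (`PanoramaPose`, `pickPoint`, `measure`), the equirectangular pixel model, and the nearest-point-to-ray strategy are illustrative assumptions, not the system's actual implementation.

```typescript
// Sketch (assumed, not the authors' exact pipeline): picking a 3D point from a
// single panorama click by intersecting the viewing ray with the laser cloud.

interface Vec3 { x: number; y: number; z: number; }

// Hypothetical camera pose of a panorama in the world frame.
interface PanoramaPose {
  position: Vec3;                       // optical center in world coordinates
  rotate: (dirCamera: Vec3) => Vec3;    // camera-frame direction -> world-frame direction
}

function sub(a: Vec3, b: Vec3): Vec3 { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function add(a: Vec3, b: Vec3): Vec3 { return { x: a.x + b.x, y: a.y + b.y, z: a.z + b.z }; }
function scale(a: Vec3, s: number): Vec3 { return { x: a.x * s, y: a.y * s, z: a.z * s }; }
function dot(a: Vec3, b: Vec3): number { return a.x * b.x + a.y * b.y + a.z * b.z; }
function norm(a: Vec3): number { return Math.sqrt(dot(a, a)); }

// Viewing-ray direction for an equirectangular panorama pixel (u, v in [0, 1]).
function pixelToRay(pose: PanoramaPose, u: number, v: number): Vec3 {
  const lon = (u - 0.5) * 2 * Math.PI;  // longitude
  const lat = (0.5 - v) * Math.PI;      // latitude
  const dirCamera: Vec3 = {
    x: Math.cos(lat) * Math.sin(lon),
    y: Math.sin(lat),
    z: Math.cos(lat) * Math.cos(lon),
  };
  return pose.rotate(dirCamera);
}

// Return the laser point closest to the viewing ray (linear scan for clarity;
// a real system would use a spatial index over the cloud).
function pickPoint(pose: PanoramaPose, u: number, v: number, cloud: Vec3[]): Vec3 | null {
  const dir = pixelToRay(pose, u, v);
  let best: Vec3 | null = null;
  let bestDist = Infinity;
  for (const p of cloud) {
    const toP = sub(p, pose.position);
    const t = dot(toP, dir);
    if (t <= 0) continue;                          // behind the camera
    const closestOnRay = add(pose.position, scale(dir, t));
    const d = norm(sub(p, closestOnRay));          // point-to-ray distance
    if (d < bestDist) { bestDist = d; best = p; }
  }
  return best;
}

// Distance between two picked points: a 3D measurement from clicks in one image.
function measure(a: Vec3, b: Vec3): number { return norm(sub(a, b)); }
```

The same picked point and a surface normal estimated from neighbouring laser points could also anchor the multimedia editor's overlays at a chosen 3D position and orientation, but that extension is not shown here.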