A SEMI-AUTOMATIC PROCEDURE FOR TEXTURING OF LASER SCANNING POINT CLOUDS WITH GOOGLE STREETVIEW IMAGES

We introduce a method for texturing 3D urban models with photographs that works even for Google Streetview images and can be carried out with currently available free software. This enables realistic texturing even when it is not possible or cost-effective to (re)visit a scanned site to take textured scans or photographs. Mapping a photograph onto a 3D model requires knowledge of the intrinsic and extrinsic camera parameters. The usual way to obtain the intrinsic parameters of a camera is to take several photographs of a calibration object with an a priori known structure. The extra challenge of using images from a database such as Google Streetview, rather than one's own photographs, is that it does not allow any controlled calibration. To overcome this limitation, we propose to calibrate the panoramic viewer of Google Streetview using Structure from Motion (SfM) on any structure of which Google Streetview offers views from multiple angles. Once the intrinsic parameters are known, the extrinsic parameters for any other view can be computed from three or more tie points between the Google Streetview image and a 3D model of the scene. These point correspondences can be obtained automatically or selected by manual annotation. We demonstrate how this procedure yields realistic 3D urban models in an easy and effective way by using it to texture a publicly available point cloud from a terrestrial laser scan made in Bremen, Germany, with a screenshot from Google Streetview, after estimating the focal length from views of Paris, France.
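As a concrete illustration of the pose-estimation and texturing step, the sketch below shows how the extrinsic parameters could be recovered from annotated tie points and then used to colour a laser point cloud. It is a minimal sketch, assuming OpenCV and NumPy rather than the specific software used in the paper; the focal length, image size, and file names are placeholders, occlusion handling (z-buffering) is omitted, and the generic OpenCV solver used here needs at least four correspondences, whereas the minimal three-point (P3P) solvers referenced in the abstract handle the three-point case.

```python
# Hypothetical sketch: recover the camera pose of a Street View screenshot from
# tie points and project it onto a laser scan. File names and numeric values are
# placeholders, not data from the paper.
import numpy as np
import cv2

# Intrinsics estimated beforehand (e.g. a focal length recovered by SfM on another
# Street View location); principal point assumed at the image centre, no distortion.
f = 1000.0                       # placeholder focal length in pixels
w, h = 1920, 1080                # placeholder screenshot size
K = np.array([[f, 0, w / 2],
              [0, f, h / 2],
              [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)

# Tie points: 3D coordinates in the scan and matching 2D pixels in the screenshot,
# either matched automatically or annotated by hand (placeholder files, N >= 4 rows).
pts_3d = np.loadtxt("tie_points_3d.txt")     # (N, 3)
pts_2d = np.loadtxt("tie_points_2d.txt")     # (N, 2)

# Extrinsic parameters (rotation and translation) from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist, flags=cv2.SOLVEPNP_EPNP)

# Project every laser point into the screenshot and sample its colour.
cloud = np.loadtxt("scan_xyz.txt")           # (M, 3) laser points, placeholder file
img = cv2.imread("streetview_screenshot.png")

proj, _ = cv2.projectPoints(cloud, rvec, tvec, K, dist)
proj = proj.reshape(-1, 2)

R, _ = cv2.Rodrigues(rvec)
depth = (R @ cloud.T + tvec).T[:, 2]         # camera-frame Z, to reject points behind the camera

u = np.round(proj[:, 0]).astype(int)
v = np.round(proj[:, 1]).astype(int)
valid = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

# Points outside the frame or behind the camera keep a default (black) colour;
# points hidden behind geometry are NOT filtered out in this simplified sketch.
colors = np.zeros((cloud.shape[0], 3), dtype=np.uint8)
colors[valid] = img[v[valid], u[valid]]      # BGR colours from the screenshot

np.savetxt("textured_cloud.txt", np.hstack([cloud, colors]))
```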
