Abstract. 3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to a few millimetres. Unfortunately, common TLS data carries no spectral information about the covered scene. However, matching TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this by matching close-range images with point cloud data, either by mounting optical camera systems on top of laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D image data and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The free movement of the camera is a key advantage for augmented reality applications and real-time measurements. To this end, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data projected back to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
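The synthetic image described above can be sketched as an ideal pinhole projection of the point cloud into a camera with a given exterior orientation. The following is a minimal illustration, not the paper's implementation; the function name, the focal length `f`, and the principal point `(cx, cy)` are assumptions for the distortion-free camera model:

```python
import numpy as np

def project_points(points_w, R, t, f, cx, cy):
    """Project world points into an ideal distortion-free pinhole camera.

    points_w : (N, 3) world coordinates of the point cloud
    R        : (3, 3) rotation matrix (exterior orientation)
    t        : (3,)   projection centre in world coordinates
    f        : focal length in pixels; cx, cy: principal point in pixels
    """
    # transform world points into the camera frame
    pts_c = (R @ (points_w - t).T).T
    # keep only points in front of the projection centre
    pts_c = pts_c[pts_c[:, 2] > 0]
    # central projection onto the image plane
    u = f * pts_c[:, 0] / pts_c[:, 2] + cx
    v = f * pts_c[:, 1] / pts_c[:, 2] + cy
    return np.column_stack([u, v])
```

Rendering each projected point with an intensity or colour attribute at pixel `(u, v)` then yields the synthetic image that is matched against the real smartphone image.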