Real-time texturing and visualization of a 2.5D terrain model from live LiDAR and RGB data streaming in a remote sensing workflow

2.5D terrain model generation from a data stream provides high-quality data that can support situational awareness, the conduct of operations, and training in simulated environments. The objective of our research is to design and implement real-time texturing and visualization of a 2.5D terrain model from live LiDAR and RGB data streams in a high-performance remote sensing workflow. To achieve real-time processing, the incoming data streams are evaluated in small patches, and the calculation time per patch must remain below the recording/sampling time. Meshing the data and projecting the images onto the mesh cannot be performed in real time on an off-the-shelf CPU. However, most of these steps are highly vectorizable (e.g., the projection of each LiDAR point into the camera images), and modern graphics cards are highly specialized for exactly this kind of data-parallel computation. Therefore, all computationally intensive steps were performed on the graphics card. Most of the steps of the terrain model generation were implemented in both CUDA and OpenCL; we compare the two technologies with respect to calculation times and memory management, and the faster technology was selected for each calculation step. Since model generation is faster than data acquisition, the implemented software operates in real time. Our approach has been embedded and tested in a real-time system consisting of a modern reconnaissance system connected to a ground control station via a radio link. During a flight, a human operator in the ground control station can observe the most recently generated textured terrain model and zoom in on areas of interest.
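
The per-point projection mentioned above is the prototypical data-parallel step in this pipeline. The following is a minimal CUDA sketch of such a kernel, assuming a pinhole camera model with known intrinsics and extrinsics; all identifiers (projectPoints, CameraPose, Intrinsics) are illustrative and not taken from the described system.

// Minimal sketch: one thread per LiDAR point, transforming the point into
// the camera frame, projecting it with a pinhole model, and sampling the
// RGB frame. Names and structures are illustrative assumptions.

#include <cuda_runtime.h>

struct CameraPose {
    float R[9];   // row-major rotation, world -> camera
    float t[3];   // translation, world -> camera
};

struct Intrinsics {
    float fx, fy;      // focal lengths in pixels
    float cx, cy;      // principal point
    int width, height; // image size in pixels
};

__global__ void projectPoints(const float3* points, int numPoints,
                              CameraPose pose, Intrinsics K,
                              const uchar3* image, uchar3* colors,
                              unsigned char* valid)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPoints) return;

    float3 p = points[i];
    // World -> camera transform: pc = R * p + t
    float x = pose.R[0]*p.x + pose.R[1]*p.y + pose.R[2]*p.z + pose.t[0];
    float y = pose.R[3]*p.x + pose.R[4]*p.y + pose.R[5]*p.z + pose.t[1];
    float z = pose.R[6]*p.x + pose.R[7]*p.y + pose.R[8]*p.z + pose.t[2];

    valid[i] = 0;
    if (z <= 0.0f) return;  // point lies behind the camera

    // Pinhole projection into pixel coordinates
    int u = (int)(K.fx * x / z + K.cx);
    int v = (int)(K.fy * y / z + K.cy);
    if (u < 0 || u >= K.width || v < 0 || v >= K.height) return;

    colors[i] = image[v * K.width + u];  // sample the RGB frame
    valid[i] = 1;
}

Lens distortion, occlusion handling, and the host-side patch scheduling are omitted here; in a patch-based workflow such as the one described, a kernel of this kind would be launched once per incoming patch, amortizing the launch overhead over many thousands of points.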