Damage Assessment on Buildings using Multisensor Multimodal Very High Resolution Images and Ancillary Data

Satellite images are useful to prevent major disasters and to mitigate their impact on populations. Their analysis is usually conducted manually by operators. Automatic processing of very high resolution (VHR) images is challenging when the images to be analysed are acquired with different modalities: acquisition angles, spatial resolutions, or even sensors; yet this situation is frequent in an operational context. We propose a method to assess damage to buildings using a pair of VHR images and ancillary data, and we evaluate its robustness to differences in these modalities. We show that the performance of our methodology decreases as the difference in acquisition angles increases, but is robust to changes in spatial resolution and to the use of images acquired with different sensors. Even under extreme conditions, damaged buildings are well detected. Our methodology achieves a global performance ranging from 72% for an angle difference of 80° to 93% for an angle difference of 24°.