Decision Fusion of Remote-Sensing Data for Land Cover Classification

Abstract: Very high spatial resolution (VHR) multispectral imagery enables a fine delineation of objects and the use of texture information. Other sensors provide a lower spatial resolution but enhanced spectral or temporal information, enabling richer land cover semantics. To benefit from the complementary characteristics of these multimodal sources, a late decision fusion scheme is proposed. It exploits the full capacities of each sensor while handling both semantic and spatial uncertainties. The different remote-sensing modalities are first classified independently. Separate class membership maps are computed and then merged at the pixel level using decision fusion rules. A final label map is obtained through a global regularization scheme that deals with spatial uncertainties while preserving the contrasts of the initial images. It relies on a probabilistic graphical model combining a fit-to-data term, related to the merged class membership measures, with an image-based contrast-sensitive regularization term. Conflict between sources can also be integrated into this scheme. Two experimental cases are presented. The first considers the fusion of VHR multispectral imagery with lower-spatial-resolution hyperspectral imagery for a fine-grained land cover classification problem in dense urban areas. The second uses SPOT 6/7 satellite imagery and Sentinel-2 time series to extract urban area footprints through a two-step process: the classifications are first merged to detect building objects, from which an urban area prior probability is derived and finally merged with the Sentinel-2 classification output for urban footprint detection.
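The abstract does not state which decision fusion rules are used, so the following is only a minimal sketch of the pixel-level merging step under an assumed min-rule (conjunctive) combination of per-sensor class membership maps, together with a simple per-pixel conflict measure; the function name `fuse_min` and the normalization choices are illustrative, not the authors' implementation.

```python
import numpy as np

def fuse_min(memberships):
    """Pixel-wise min-rule fusion of per-sensor class membership maps.

    memberships: list of (H, W, C) arrays, each giving, per pixel,
    membership measures over the C land cover classes.
    Returns fused memberships, a label map, and a conflict map.
    """
    # Conjunctive combination: a class is supported only if all
    # sources support it.
    fused = np.minimum.reduce(memberships)
    # Conflict indicator: high when even the best-supported class has
    # little joint support from the sources.
    conflict = 1.0 - fused.max(axis=-1)
    # Renormalize so the fused measures sum to 1 per pixel; fall back
    # to a uniform distribution where all classes have zero support.
    total = fused.sum(axis=-1, keepdims=True)
    fused = np.where(total > 0,
                     fused / np.maximum(total, 1e-12),
                     1.0 / fused.shape[-1])
    # Per-pixel label before any spatial regularization.
    labels = fused.argmax(axis=-1)
    return fused, labels, conflict
```

In the full scheme described above, `labels` would not be taken directly: the fused memberships feed the fit-to-data term of the graphical model, and the final label map comes from the contrast-sensitive global regularization.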