Full-Resolution Image Segmentation Model Combining Multi-Source Input Information

In this paper, a full-resolution image segmentation model with multi-source input information is proposed and applied to road extraction. A convolution-deconvolution network is adopted as the backbone, and a full-resolution branch is added alongside it. A data exchange mechanism is established between the backbone and the full-resolution branch, which not only overcomes the reduced feature resolution and loss of detail caused by repeated pooling operations, but also aggregates multi-scale features during the convolution stage. The aggregated features are transferred to the corresponding layers of the deconvolution stage, strengthening feature fusion. Multi-source images are used as input, and their predictions are fused by weighting at the end of the network, highlighting the target while effectively suppressing misclassification. Experiments on the Road Detection Dataset show that the proposed method outperforms state-of-the-art comparison methods.
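
The abstract does not specify implementation details, so the following is only a minimal sketch, assuming a PyTorch-style implementation, of the two ideas summarized above: a full-resolution branch that exchanges features with an encoder-decoder (convolution-deconvolution) backbone, and weighted fusion of predictions from multiple input sources. All module names, channel widths, and fusion weights are hypothetical illustrations, not the authors' design.

```python
# Hypothetical sketch; not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class FullResSegNet(nn.Module):
    """Encoder-decoder backbone plus a full-resolution branch with feature exchange."""

    def __init__(self, in_ch=3, ch=32, n_classes=2):
        super().__init__()
        # Backbone encoder (resolution drops via pooling).
        self.enc1 = conv_block(in_ch, ch)
        self.enc2 = conv_block(ch, ch * 2)
        # Full-resolution branch (no pooling, spatial size preserved).
        self.full1 = conv_block(in_ch, ch)
        self.full2 = conv_block(ch, ch)
        # Decoder aggregates backbone and full-resolution features.
        self.dec1 = conv_block(ch * 2 + ch, ch)
        self.head = nn.Conv2d(ch, n_classes, 1)

    def forward(self, x):
        # Stage 1: parallel features at full resolution, with exchange.
        f1 = self.full1(x)
        e1 = self.enc1(x) + f1                  # full-res branch -> backbone
        # Stage 2: backbone downsamples; full-res branch keeps spatial size.
        e2 = self.enc2(F.max_pool2d(e1, 2))
        f2 = self.full2(f1 + e1)                # backbone -> full-res branch
        # Decoder: upsample backbone features and fuse with full-res features.
        e2_up = F.interpolate(e2, size=f2.shape[-2:], mode="bilinear",
                              align_corners=False)
        return self.head(self.dec1(torch.cat([e2_up, f2], dim=1)))


def fuse_predictions(logits_list, weights):
    """Weighted fusion of per-source predictions at the end of the network."""
    probs = [w * F.softmax(l, dim=1) for l, w in zip(logits_list, weights)]
    return torch.stack(probs).sum(dim=0)


# Hypothetical usage: one branch per input source, fused with example weights.
net_rgb, net_aux = FullResSegNet(in_ch=3), FullResSegNet(in_ch=1)
rgb = torch.randn(1, 3, 256, 256)
aux = torch.randn(1, 1, 256, 256)
fused = fuse_predictions([net_rgb(rgb), net_aux(aux)], weights=[0.6, 0.4])
```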