Remote Sensing Image Super-Resolution via Dual-Resolution Network Based on Connected Attention Mechanism

Limited by hardware conditions and complex degradation processes, acquired remote sensing images (RSIs) are often low-resolution (LR) data lacking high-frequency information. Image super-resolution (SR) aims to improve the spatial resolution of images and recover plausible detail. Although existing convolutional neural network (CNN)-based methods achieve good performance by adding residual structures and attention mechanisms to the network, simply stacking residual structures and embedding attention modules directly on the residual branch leads to localized use of features and loss of information. To address these problems, we propose a dual-resolution connected attention network (DRCAN). Specifically, a high-resolution (HR) learning branch is constructed to complement the mapping learned between LR and HR images, and a connected attention module with residual learning is introduced to make full use of intermediate features at different levels. In addition, we collect data at different resolutions from Google Earth to build a dataset, named XD IPIU, for RSI SR. Extensive experiments demonstrate the effectiveness of the proposed model: DRCAN achieves state-of-the-art performance in both quantitative evaluation and visual quality.
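
To make the abstract's architectural terms concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes one plausible reading of "connected attention with residual learning" (channel attention applied jointly to the concatenated outputs of several residual blocks rather than inside each residual branch) and a toy dual-branch layout where an LR trunk feeds an upsampled HR learning branch. All module names, channel counts, and the fusion strategy are illustrative assumptions.

```python
# Illustrative sketch only; all design details are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Standard squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class ConnectedAttentionGroup(nn.Module):
    """Hypothetical "connected attention": attention weights the concatenation
    of all intermediate block outputs, so different feature levels are used
    jointly instead of attending inside each residual branch."""
    def __init__(self, channels, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualBlock(channels) for _ in range(num_blocks))
        self.attention = ChannelAttention(channels * num_blocks)
        self.fuse = nn.Conv2d(channels * num_blocks, channels, 1)

    def forward(self, x):
        feats, h = [], x
        for block in self.blocks:
            h = block(h)
            feats.append(h)
        fused = self.fuse(self.attention(torch.cat(feats, dim=1)))
        return x + fused  # residual learning around the whole group


class DualResolutionSR(nn.Module):
    """Toy dual-branch layout: an LR trunk followed by an HR learning branch
    that refines features after upsampling (illustrative only)."""
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.lr_branch = ConnectedAttentionGroup(channels)
        self.to_hr = nn.Sequential(
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.hr_branch = ConnectedAttentionGroup(channels)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr):
        x = self.head(lr)
        hr_feat = self.to_hr(self.lr_branch(x))
        return self.tail(self.hr_branch(hr_feat))


if __name__ == "__main__":
    model = DualResolutionSR()
    out = model(torch.rand(1, 3, 32, 32))
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```

The design choice sketched here is that attention sees features from several depths at once, which is one way to avoid the "localized use of features" the abstract attributes to embedding attention directly on each residual branch.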