Neural Information Processing: 26th International Conference, ICONIP 2019, Sydney, NSW, Australia, December 12–15, 2019, Proceedings, Part V

Recently, a great variety of CNN-based methods have been proposed for single-image super-resolution, but how to restore more high-frequency details remains an open problem. The low-frequency information in a low-resolution image is largely shared with its high-resolution counterpart, so the model only needs to pay more attention to the high-frequency information in order to restore more realistic images with abundant details that better match the human visual system. In this paper, we propose a deep residual-dense attention network (RDAN) for image super-resolution. Specifically, we propose a channel attention module that reweights each channel and a spatial attention module that rescales the region weights within a channel map, which together make the model focus more on high-frequency information. Experimental results on five benchmark datasets show that RDAN is superior to state-of-the-art methods in both accuracy and visual quality.
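The abstract describes two attention mechanisms: a channel attention module that reweights whole channels, and a spatial attention module that reweights positions within a feature map. The paper's exact layer definitions are not given here, so the following is only a minimal NumPy sketch of the general idea (squeeze-and-excitation-style channel gating and a channel-pooled spatial gate); the function names, the two-layer excitation weights `w1`/`w2`, and the scalar mixing coefficients `a`/`b` are all illustrative assumptions, not RDAN's actual parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Rescale each channel of feat (C, H, W) by a learned weight in (0, 1).

    w1, w2 are illustrative stand-ins for a small two-layer excitation MLP.
    """
    # Squeeze: global average pooling reduces each channel to one scalar.
    z = feat.mean(axis=(1, 2))                        # (C,)
    # Excite: bottleneck MLP + sigmoid yields per-channel weights in (0, 1).
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))         # (C,)
    # Rescale: broadcast the channel weights over the spatial dimensions.
    return feat * s[:, None, None]

def spatial_attention(feat, a=1.0, b=1.0):
    """Rescale each spatial position of feat (C, H, W) by a weight in (0, 1).

    a and b are illustrative scalar coefficients mixing the pooled maps.
    """
    # Pool across channels to two (H, W) descriptor maps.
    avg_map = feat.mean(axis=0)                       # (H, W)
    max_map = feat.max(axis=0)                        # (H, W)
    # Combine into a spatial weight map in (0, 1).
    m = sigmoid(a * avg_map + b * max_map)            # (H, W)
    # Rescale: broadcast the spatial weights over the channel dimension.
    return feat * m[None, :, :]
```

Because both gates are sigmoid-bounded, each output value is the input value scaled by a factor in (0, 1), so the modules act as soft masks that can suppress less informative channels or regions while preserving feature-map shape.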