Sub-band Energy Constraints for Self-Similarity Based Super-resolution

In this paper, we propose a new self-similarity-based single-image super-resolution (SR) algorithm that better synthesizes the fine textural details of an image. Conventional self-similarity-based SR typically uses scaled-down version(s) of the given image to first build a dictionary of low-resolution (LR) and high-resolution (HR) image patches, which is then used to predict an HR patch for each LR patch of the given image. However, metrics such as the pixel-wise sum of squared differences (L2 distance) make it difficult to find matches for high-frequency textured patches in the dictionary, so textural details are often smoothed out in the final image. In this paper, we propose a method to compensate for this loss of textural detail. Our algorithm represents texture using the responses of a bank of orientation-selective band-pass filters rather than the spatial variation of intensity values directly. Specifically, we use the energies contained in different sub-bands of an image patch to separate the different types of detail in a texture, and we impose these energies as additional priors on the patches of the super-resolved image. Our experiments show that, for each patch, the low-energy sub-bands (which correspond to fine textural details) are severely attenuated during conventional L2-distance-based SR. We propose a method to learn this attenuation of sub-band energies from scaled-down version(s) of the given image itself (without requiring external training databases), and thus a way of compensating for the energy loss in these sub-bands. We demonstrate that, as a consequence, our SR results appear richer in texture and closer to the ground truth than those of several other state-of-the-art methods.
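
The following is a minimal, self-contained sketch of the core idea, not the authors' implementation: a patch is represented by the energies of its responses to a small bank of orientation-selective band-pass filters (here Gabor kernels), per-sub-band attenuation factors are estimated from pairs of ground-truth HR patches and their smoothed SR reconstructions, and the corresponding sub-bands of an SR patch are boosted accordingly. The filter parameters, the choice of Gabor kernels, and the gain-estimation step are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=2.0, size=9):
    """Real-valued, zero-mean oriented band-pass (Gabor) kernel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    kernel = envelope * np.cos(2.0 * np.pi * freq * xr)
    return kernel - kernel.mean()  # remove DC so the filter is band-pass

# Bank of orientation-selective band-pass filters: 2 scales x 4 orientations
# (illustrative parameters, not the paper's settings).
BANK = [gabor_kernel(freq, theta)
        for freq in (0.15, 0.30)
        for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]

def subband_energies(patch):
    """Energy of the patch in each sub-band of the filter bank."""
    patch = patch.astype(float)
    return np.array([np.sum(convolve2d(patch, k, mode='same')**2) for k in BANK])

def learn_gains(hr_patches, sr_patches, eps=1e-8):
    """Per-sub-band attenuation factors, estimated from pairs of ground-truth
    HR patches and their (smoothed) SR reconstructions."""
    e_hr = np.mean([subband_energies(p) for p in hr_patches], axis=0)
    e_sr = np.mean([subband_energies(p) for p in sr_patches], axis=0)
    return e_hr / (e_sr + eps)  # > 1 wherever SR attenuated the sub-band

def compensate(sr_patch, gains):
    """Add back attenuated detail: boost each sub-band response of the SR
    patch by the learned gain (amplitude scales as the square root of energy)."""
    out = sr_patch.astype(float)
    for k, g in zip(BANK, gains):
        band = convolve2d(sr_patch.astype(float), k, mode='same')
        out += (np.sqrt(g) - 1.0) * band
    return out
```

In a full pipeline, the gains would be learned from the LR/HR patch pairs harvested from scaled-down versions of the input image itself, mirroring the paper's use of internal (cross-scale) self-similarity rather than an external training database.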
