Robust super-resolution for interactive video navigation
One of the main technical limitations of interactive systems for next-generation audio-visual experiences is the limited resolution of the captured content. The presented method tackles the problem of generating super-resolved versions of input video frames, allowing the user to view the captured content at any desired scale with minimal degradation. First, the low-frequency band of the super-resolved video frame is estimated as an up-scaled interpolation of the low-resolution frame. Then, the high-frequency band is extrapolated from the low-resolution frame by exploiting local cross-scale self-similarity. The introduction of a suitable image prior in both stages makes it possible to robustly enhance the spatial resolution even in video sequences containing aliasing. The most demanding processing stages of the presented algorithm have been implemented on graphics hardware (GPU). The experimental results show a level of quality similar to that of state-of-the-art methods, with the advantages of real-time processing and robustness against spatial aliasing.
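The two-stage scheme described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's actual implementation, which runs on the GPU and includes an image prior): stage 1 builds the low-frequency band by bilinear up-scaling, and stage 2 transfers high-frequency detail from a cross-scale example pair formed by the input frame and a smoothed version of itself, matching patches within a small local search window. All function names and parameter choices here are our own assumptions.

```python
import numpy as np

def upscale_bilinear(img, s):
    """Separable bilinear up-scaling of a grayscale image by integer factor s."""
    h, w = img.shape
    ys = (np.arange(h * s) + 0.5) / s - 0.5
    xs = (np.arange(w * s) + 0.5) / s - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # clamp weights at the borders
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    tl = img[np.ix_(y0, x0)];     tr = img[np.ix_(y0, x0 + 1)]
    bl = img[np.ix_(y0 + 1, x0)]; br = img[np.ix_(y0 + 1, x0 + 1)]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

def box_downscale(img, s):
    """Down-scale by integer factor s using s-by-s box averaging."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def sr_self_similarity(lr, s=2, p=5, win=2):
    """Sketch of two-band super-resolution via local cross-scale self-similarity.

    Stage 1: low-frequency band of the output = interpolated input frame.
    Stage 2: for each output pixel, search a small window around the co-located
    low-resolution position for the patch (in the smoothed input) that best
    matches the local low-frequency patch, and paste the corresponding
    high-frequency detail of the input frame.
    """
    h, w = lr.shape
    low = upscale_bilinear(lr, s)                        # stage 1: low-frequency band
    coarse = upscale_bilinear(box_downscale(lr, s), s)   # smoothed input = its own low band
    high_lr = lr - coarse                                # high-frequency example layer
    r = p // 2
    lp = np.pad(low, r, mode="reflect")                  # padded copies for patch access
    cp = np.pad(coarse, r, mode="reflect")
    out = low.copy()
    for Y in range(low.shape[0]):
        for X in range(low.shape[1]):
            q = lp[Y:Y + p, X:X + p]                     # query patch in the HR low band
            cy, cx = Y // s, X // s                      # co-located low-res position
            best_d, best = np.inf, (cy, cx)
            for yy in range(max(0, cy - win), min(h, cy + win + 1)):
                for xx in range(max(0, cx - win), min(w, cx + win + 1)):
                    d = np.sum((q - cp[yy:yy + p, xx:xx + p]) ** 2)
                    if d < best_d:
                        best_d, best = d, (yy, xx)
            out[Y, X] = low[Y, X] + high_lr[best]        # stage 2: add matched detail
    return out
```

The pixel-wise search loop is written for clarity, not speed; the paper notes that the demanding stages were moved to the GPU, where each output pixel's local search can run independently in parallel.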