Video Temporal Super-Resolution Based on Self-similarity

We propose a method for generating a temporally super-resolved video from a single video by exploiting the self-similarity that exists in the spatio-temporal domain of videos. Temporal super-resolution is an inherently ill-posed problem because an infinite number of high-temporal-resolution frame sequences can produce the same low-temporal-resolution frame. The key idea for resolving this ambiguity is to exploit self-similarity, i.e., the self-similar appearance that arises when the motion of objects is integrated over the exposure time of videos with different temporal resolutions. In contrast to methods that synthesize plausible intermediate frames by temporal interpolation, our method increases the temporal resolution of the given frames themselves, for instance by resolving one frame into two frames. Through quantitative evaluation of experimental results, we demonstrate that our method generates videos with increased temporal resolution and thereby recovers the appearance of dynamic scenes.
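To make the ambiguity concrete, the exposure-integration model commonly assumed in temporal super-resolution can be written as follows; the symbols L_t, H, and n below are our own illustrative notation rather than the paper's exact formulation:

\[
L_t \;=\; \frac{1}{n} \sum_{k=0}^{n-1} H_{nt+k},
\]

where L_t is an observed frame captured with a full exposure time, H_{nt+k} are the latent high-temporal-resolution sub-frames, and n is the temporal magnification factor (n = 2 when one frame is resolved into two). Since infinitely many sub-frame combinations average to the same L_t, an additional prior such as self-similarity is needed to select a plausible solution.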

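A minimal sketch of the self-similarity idea is given below, assuming the input is a grayscale video stored as a NumPy array of shape (T, H, W). The helper names (temporal_downsample, extract_patches, super_resolve_frame) and the brute-force patch search are ours for illustration only; they mimic the example-based use of cross-temporal-scale self-similarity and omit the temporal-consistency constraints of the actual method.

import numpy as np

def temporal_downsample(video, factor=2):
    # Average each group of `factor` consecutive frames, simulating a video
    # captured with a `factor`-times longer exposure (lower temporal resolution).
    T = (video.shape[0] // factor) * factor
    grouped = video[:T].astype(np.float64).reshape(-1, factor, *video.shape[1:])
    return grouped.mean(axis=1)

def extract_patches(frames, size=8, stride=8):
    # Collect spatial patches from every frame together with their
    # frame index and top-left position.
    patches = []
    for t, frame in enumerate(frames):
        for y in range(0, frame.shape[0] - size + 1, stride):
            for x in range(0, frame.shape[1] - size + 1, stride):
                patches.append(((t, y, x),
                                frame[y:y + size, x:x + size].astype(np.float64)))
    return patches

def super_resolve_frame(blurred_frame, low_video, high_video,
                        size=8, stride=8, factor=2):
    # For each patch of the observed (motion-blurred) frame, find its most
    # similar patch in the temporally downsampled video and copy back the
    # corresponding group of sharper frames from the original video.
    db = extract_patches(low_video, size, stride)
    out = [np.zeros_like(blurred_frame) for _ in range(factor)]
    for y in range(0, blurred_frame.shape[0] - size + 1, stride):
        for x in range(0, blurred_frame.shape[1] - size + 1, stride):
            query = blurred_frame[y:y + size, x:x + size].astype(np.float64)
            # Brute-force nearest neighbour for clarity; an approximate
            # search (e.g., FLANN) would be used in practice for speed.
            (t, py, px), _ = min(db, key=lambda e: np.sum((e[1] - query) ** 2))
            for k in range(factor):
                out[k][y:y + size, x:x + size] = \
                    high_video[factor * t + k, py:py + size, px:px + size]
    return out

Given an input video V, calling super_resolve_frame(V[t], temporal_downsample(V), V) returns two candidate sub-frames for frame t: patches blurred by a doubled exposure in the downsampled copy point back to their sharper counterparts within V itself, which is the cross-scale self-similarity the abstract refers to.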