Video Deflickering Using Multi-Frame Optimization

In this paper, we propose an approach to removing flickering artifacts from a video, where the flicker is introduced by applying image-based processing methods frame by frame to an originally flicker-free video. Traditional video deflickering methods reconstruct each flickering frame from non-flickering frames; they often fail to preserve spatial consistency and are typically designed for a specific flickering artifact under particular conditions. In contrast, we propose a general multi-frame video deflickering approach that accounts for both temporal and spatial coherence. Instead of reconstructing a flickering frame from its previous frame alone, we warp multiple corresponding frames to reconstruct it, which reduces the warping inaccuracy in the reconstruction process. By combining video fidelity, temporal coherence, and spatial coherence, we formulate the video deflickering objective as a least-squares energy. A flicker-free output video is obtained by solving this energy formulation with least angle regression. Results on visual quality, objective measurements, and a user study demonstrate the effectiveness of our multi-frame video deflickering approach.
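To make the energy formulation concrete, the following is a minimal sketch of the multi-frame least-squares idea, under simplifying assumptions: it keeps only the fidelity and temporal-coherence terms (omitting the spatial term and the least angle regression solver used in the paper), and the function name `deflicker_frame` and weight `lam` are illustrative, not from the paper. With only quadratic terms, the minimizer has a closed form.

```python
import numpy as np

def deflicker_frame(processed, warped_neighbors, lam=1.0):
    """Reconstruct a deflickered frame by minimizing a quadratic energy:

        E(x) = ||x - processed||^2 + lam * sum_k ||x - warped_k||^2

    `processed` is the flickering (image-processed) frame; `warped_neighbors`
    are several temporally adjacent frames warped into its coordinates.
    Setting dE/dx = 0 gives the per-pixel closed-form minimizer below.
    """
    k = len(warped_neighbors)
    acc = processed + lam * np.sum(warped_neighbors, axis=0)
    return acc / (1.0 + lam * k)

# Toy example: a frame whose brightness flickers upward is pulled back
# toward the consensus of two warped neighboring frames.
processed = np.full((2, 2), 0.8)   # flickering frame (too bright)
neighbors = [np.full((2, 2), 0.5), np.full((2, 2), 0.5)]
out = deflicker_frame(processed, neighbors, lam=1.0)
# out = (0.8 + 0.5 + 0.5) / 3 = 0.6, between fidelity and temporal consensus
```

Using several warped neighbors rather than only the previous frame averages out per-frame warping errors, which is the motivation for the multi-frame formulation.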
