Video flickering removal using temporal reconstruction optimization

In this paper, we introduce an approach to remove flickering from videos, where the flickering is caused by applying image-based processing methods to a video frame by frame. First, we propose a multi-frame video flicker removal method that reconstructs each flickering frame from multiple temporally corresponding frames. Compared with traditional methods, which reconstruct a flickering frame from only a single adjacent frame, reconstruction from multiple temporally corresponding frames reduces warping inaccuracy. We then optimize our method in two ways. On the one hand, we detect flickering frames in the video sequence using temporal consistency metrics; reconstructing only the flickering frames greatly accelerates the algorithm. On the other hand, we use only the preceding temporally corresponding frames to reconstruct each output frame. We further accelerate our flicker removal with a GPU implementation. Qualitative experimental results demonstrate the effectiveness of the proposed method. With algorithmic optimization and GPU acceleration, our method also outperforms traditional video temporal coherence methods in running time.
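To illustrate the detect-then-reconstruct pipeline described above, the sketch below flags flickering frames with a simple temporal consistency metric (a jump in mean intensity between consecutive frames) and rebuilds each flagged frame from its preceding frames. The metric, the threshold, and plain averaging (standing in for motion-compensated warping of temporally corresponding frames) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def detect_flicker(frames, threshold=10.0):
    """Flag frames whose mean intensity jumps sharply relative to the
    previous frame. NOTE: a stand-in temporal consistency metric; the
    paper's metric may differ."""
    means = [f.mean() for f in frames]
    flags = [False]
    for i in range(1, len(frames)):
        flags.append(bool(abs(means[i] - means[i - 1]) > threshold))
    return flags

def reconstruct(frames, flags, window=3):
    """Rebuild each flagged frame as the average of up to `window`
    preceding reconstructed frames. NOTE: plain averaging stands in
    for the paper's warped multi-frame reconstruction."""
    out = []
    for i, frame in enumerate(frames):
        if flags[i]:
            refs = out[max(0, i - window):i]
            out.append(np.mean(refs, axis=0) if refs else frame.astype(np.float64))
        else:
            out.append(frame.astype(np.float64))
    return out
```

Because only flagged frames are reconstructed, the cost of the correction step scales with the number of flickering frames rather than the length of the video, which is the source of the speed-up described above.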
