Photometric Stabilization for Fast‐forward Videos

Videos captured by consumer cameras often exhibit temporal variations in color and tone caused by camera auto-adjustments such as white balance and exposure. When such videos are sub-sampled for fast-forward playback, as in the increasingly popular timelapse and hyperlapse formats, these temporal variations are exacerbated and appear as visually disturbing high-frequency flickering. Previous techniques for photometrically stabilizing videos typically rely on computing dense correspondences between video frames and use these correspondences to remove all color changes in the sequence. This approach breaks down for fast-forward videos, which often have large content changes between frames and may also exhibit changes in scene illumination that should be preserved. In this work, we propose a novel photometric stabilization algorithm for fast-forward videos that is robust to large content variation across frames. We compute pairwise color and tone transformations between neighboring frames and smooth these pairwise transformations while taking into account the possibility of scene/content variations. This allows us to eliminate high-frequency fluctuations while still adapting to real variations in scene characteristics. We evaluate our technique on a new dataset consisting of controlled synthetic and real videos, and demonstrate that it outperforms the state of the art.
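
The sketch below illustrates the two-stage idea described in the abstract, under a deliberately simplified per-channel gain model standing in for the richer color and tone transformations the paper refers to: estimate a pairwise transform between neighboring frames from corresponding color samples, accumulate these into a per-frame color trajectory, and smooth that trajectory with content-aware temporal weights so that only the high-frequency residual is corrected. All function names, the gain-only transform model, and the histogram-based content weighting are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def pairwise_gain(src, dst):
    """Least-squares per-channel gain mapping src colors to dst colors.
    src, dst: (N, 3) arrays of corresponding RGB samples (e.g., at matched features)."""
    return (src * dst).sum(axis=0) / np.maximum((src * src).sum(axis=0), 1e-6)

def content_weights(frames, t, radius=8, sigma=2.0):
    """Gaussian weights over a temporal window around frame t, attenuated when a
    neighbor's content differs strongly from frame t (intensity-histogram distance)."""
    hist = lambda f: np.histogram(f, bins=32, range=(0.0, 1.0))[0] / f.size
    h_t = hist(frames[t])
    w = {}
    for s in range(max(0, t - radius), min(len(frames), t + radius + 1)):
        d = 0.5 * np.abs(h_t - hist(frames[s])).sum()      # content change in [0, 1]
        w[s] = np.exp(-((s - t) ** 2) / (2 * sigma ** 2)) * (1.0 - d)
    total = sum(w.values())
    return {s: v / total for s, v in w.items()}

def stabilize(frames, samples, radius=8):
    """frames: list of float RGB images in [0, 1].
    samples[t]: (src, dst) corresponding color samples between frames t-1 and t.
    Returns photometrically stabilized frames."""
    n = len(frames)
    log_gain = np.zeros((n, 3))                            # accumulated color state per frame
    for t in range(1, n):
        g = pairwise_gain(*samples[t])
        log_gain[t] = log_gain[t - 1] + np.log(np.clip(g, 1e-3, 1e3))
    out = []
    for t in range(n):
        w = content_weights(frames, t, radius)
        smoothed = sum(wt * log_gain[s] for s, wt in w.items())
        # Correct only the deviation from the smoothed trajectory: high-frequency
        # flicker is removed while slow, real illumination changes are preserved.
        correction = np.exp(smoothed - log_gain[t])
        out.append(np.clip(frames[t] * correction, 0.0, 1.0))
    return out
```

Smoothing in log-gain space keeps the per-frame corrections multiplicative and symmetric, and down-weighting temporally distant or content-dissimilar neighbors is one plausible way to avoid propagating transforms across large scene changes; the actual method may differ in both respects.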
