Dynamic Texture Synthesis: Compact Models Based on Luminance-Chrominance Color Representation

Dynamic textures are image sequences that exhibit temporal regularity; examples include videos of smoke, flames, ocean waves, and wind-shaken forests. Dynamic texture modelling and synthesis has usually been carried out on RGB color images. In this paper, we analyze alternative color encodings that model luminance and chrominance information separately. We find that this separation is better suited to the task, since it exploits the distinct spatial and temporal characteristics of the color channels and leads to more flexible and compact representations. We show that, compared to RGB, YCbCr or Lab color encodings achieve similar synthesis quality with half the model coefficients and lower computational cost.
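As a minimal sketch of the luminance-chrominance separation discussed above, the helper below converts full-range RGB pixels to YCbCr using the standard ITU-R BT.601 coefficients (the function name and array conventions are illustrative assumptions, not from the paper):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert full-range RGB to YCbCr (ITU-R BT.601 coefficients).

    rgb: float array of shape (..., 3) with values in [0, 255].
    Returns a YCbCr array of the same shape: Y carries luminance,
    Cb/Cr carry chrominance, centered at 128.
    """
    m = np.array([[ 0.299,     0.587,     0.114   ],   # Y
                  [-0.168736, -0.331264,  0.5     ],   # Cb
                  [ 0.5,      -0.418688, -0.081312]])  # Cr
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 128.0  # offset chroma channels to mid-range
    return ycbcr
```

After this transform, the perceptually dominant structure sits in the Y channel, so the chroma channels can be represented with fewer model coefficients; a mid-gray pixel such as `(128, 128, 128)` maps to `Y = 128` with neutral chroma `Cb = Cr = 128`.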
