How Do People Edit Light Fields?
To appear in ACM TOG 33(4).

We present a thorough study to evaluate different light field editing interfaces, tools, and workflows from a user perspective. This is of special relevance given the multidimensional nature of light fields, which may make common image editing tasks complex in light field space. We additionally investigate the potential benefits of using depth information when editing, and the limitations imposed by imperfect depth reconstruction with current techniques. We perform two different experiments, collecting both objective and subjective data from a varied set of editing tasks of increasing complexity based on local point-and-click tools. In the first experiment, we rely on perfect depth from synthetic light fields and focus on simple edits. This allows us to gain basic insight into light field editing and to design a more advanced editing interface. This interface is then used in the second experiment, which employs real light fields with imperfect reconstructed depth and covers more advanced editing tasks. Our study shows that users can edit light fields with our tested interface and tools, even in the presence of imperfect depth. They follow different workflows depending on the task at hand, mostly relying on a combination of different depth cues. Finally, we confirm our findings by asking a set of artists to freely edit both real and synthetic light fields.

CR Categories: I.3.4 [Computer Graphics]: Graphics Utilities—Paint systems; I.3.6 [Computer Graphics]: Methodology and Techniques—Interaction techniques
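
The local point-and-click tools in the study rely on depth to keep an edit consistent across all views of the light field. As a rough illustration of that idea (not the authors' implementation), the Python sketch below forward-projects a masked edit from a reference view to every other view of a two-plane parameterized light field using a per-pixel disparity map; all function and parameter names are hypothetical.

import numpy as np

def propagate_edit(light_field, disparity, edit_mask, edit_color, ref_view):
    # Minimal sketch of depth-based edit propagation, assuming a 4D light
    # field sampled on a regular camera grid, light_field[u, v, s, t, 3],
    # plus a per-pixel disparity map for the reference view. For a roughly
    # Lambertian point with disparity d seen at pixel (s, t) in view
    # (u0, v0), the same point appears near (s + d*(u - u0), t + d*(v - v0))
    # in view (u, v), up to the sign convention used for disparity.
    # Names and signatures are illustrative, not taken from the paper.
    U, V, S, T, _ = light_field.shape
    u0, v0 = ref_view
    ss, tt = np.nonzero(edit_mask)          # edited pixels in the reference view
    d = disparity[ss, tt]
    for u in range(U):
        for v in range(V):
            # shift each edited pixel according to its disparity
            s_new = np.round(ss + d * (u - u0)).astype(int)
            t_new = np.round(tt + d * (v - v0)).astype(int)
            ok = (s_new >= 0) & (s_new < S) & (t_new >= 0) & (t_new < T)
            light_field[u, v, s_new[ok], t_new[ok]] = edit_color
    return light_field

A naive forward splat like this misplaces pixels and leaves gaps wherever the disparity map is wrong, which is the kind of degradation the second experiment, with its imperfect reconstructed depth, exposes users to.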
