Rapid material capture through sparse and multiplexed measurements

Abstract

Among the many models for material appearance, data-driven representations like bidirectional texture functions (BTFs) play an important role, as they provide accurate real-time reproduction of complex light transport effects such as interreflections. However, their acquisition involves the time-consuming capture of many thousands of bidirectional samples in order to avoid interpolation artifacts. Furthermore, high dynamic range imaging with many long exposure steps is necessary in the presence of low albedo or self-shadowing. So far, these problems have been dealt with separately by means of sparse reconstruction and multiplexed illumination techniques, respectively. Existing methods rely on data-driven models learned on data that has been range-reduced in a way that makes their simultaneous application impossible. In this paper, we address both problems at once through a novel method for learning data-driven appearance models, based on moving the dynamic range reduction from the data to the metric. Specifically, we learn models by minimizing the relative L2 error on the original data instead of the absolute L2 error on range-reduced data. We demonstrate that the models thus obtained allow for faithful reconstruction of material appearance from sparse and illumination-multiplexed measurements, greatly reducing both the number of images and the shutter times required. As a result, we are able to reduce acquisition times from the order of hours down to the order of minutes.
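The key idea of moving the dynamic range reduction from the data to the metric can be illustrated with a minimal sketch. The function names and the epsilon regularizer below are illustrative assumptions, not the paper's implementation; the sketch only contrasts the absolute L2 error (as it would be computed on range-reduced data) with a relative L2 error evaluated directly on the original high-dynamic-range samples:

```python
import numpy as np

def absolute_l2(x, x_hat):
    # Absolute L2 error: residuals from bright samples dominate,
    # which is why prior work applied it to range-reduced data.
    return np.sum((x - x_hat) ** 2)

def relative_l2(x, x_hat, eps=1e-3):
    # Relative L2 error on the original data: each residual is
    # normalized by the magnitude of the reference value, so dark
    # (low-albedo or self-shadowed) samples contribute on a
    # comparable scale to bright ones. eps avoids division by zero.
    return np.sum(((x - x_hat) / (np.abs(x) + eps)) ** 2)

# Toy reflectance samples spanning a large dynamic range,
# all reconstructed with the same 10% relative overestimate.
x = np.array([0.01, 0.1, 1.0, 100.0])
x_hat = 1.1 * x

abs_err = absolute_l2(x, x_hat)   # dominated by the brightest sample
rel_err = relative_l2(x, x_hat)   # each sample contributes comparably
```

Under the absolute metric, the single bright sample accounts for essentially all of the error, so a model minimizing it would ignore the dark samples; the relative metric weights all four samples similarly, which is what allows the learned model to remain faithful across the full dynamic range.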
