NeRF-Texture: Texture Synthesis with Neural Radiance Fields

Texture synthesis is a fundamental problem in computer graphics that benefits a wide range of applications. Existing methods are effective for 2D image textures. However, many real-world textures, such as grass, leaves, and fabrics, contain meso-structure in 3D geometry space that cannot be effectively modeled with 2D images alone. We propose a novel texture synthesis method based on Neural Radiance Fields (NeRF) that captures and synthesizes such textures from multi-view images. In the proposed NeRF-texture representation, a scene with fine geometric details is disentangled into its meso-structure texture and the underlying base shape. This allows textures with meso-structure to be learned as latent features situated on the base shape, which are fed into a simultaneously trained NeRF decoder that represents the rich view-dependent appearance. With this implicit representation, we synthesize NeRF-based textures through patch matching of latent features. However, inconsistencies between the metrics of the reconstructed content space and the latent feature space may degrade synthesis quality. To improve matching performance, we further regularize the distribution of latent features with a clustering constraint. Experimental results and evaluations demonstrate the effectiveness of our approach.
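The patch matching step described above can be illustrated with a simplified sketch. The abstract does not specify the exact algorithm, so the function below is a hypothetical, minimal quilting-style synthesizer over a 2D latent feature map (all names and parameters are assumptions): it raster-scans the output, and at each location pastes the exemplar patch whose overlap with already-placed features has the smallest L2 distance in latent space.

```python
import numpy as np

def synthesize_latent_map(exemplar, out_h, out_w, patch=8, overlap=2, seed=0):
    """Hypothetical patch-based synthesis over a latent feature map.

    exemplar: (H, W, C) array of latent features captured on the base shape.
    Returns an (out_h, out_w, C) map grown by greedy patch matching
    (no seam optimization; purely illustrative).
    """
    rng = np.random.default_rng(seed)
    H, W, C = exemplar.shape
    step = patch - overlap
    out = np.zeros((out_h, out_w, C), dtype=exemplar.dtype)

    # All candidate top-left corners of patches in the exemplar.
    cands = [(i, j) for i in range(H - patch + 1) for j in range(W - patch + 1)]

    for y in range(0, out_h - patch + 1, step):
        for x in range(0, out_w - patch + 1, step):
            if y == 0 and x == 0:
                # Seed the synthesis with a random exemplar patch.
                i, j = cands[rng.integers(len(cands))]
            else:
                # Pick the candidate whose left/top overlap region best
                # matches the features already placed in the output.
                target = out[y:y + patch, x:x + patch]
                best, best_cost = cands[0], np.inf
                for (i, j) in cands:
                    cand = exemplar[i:i + patch, j:j + patch]
                    cost = 0.0
                    if x > 0:  # left overlap strip
                        cost += np.sum((cand[:, :overlap] - target[:, :overlap]) ** 2)
                    if y > 0:  # top overlap strip
                        cost += np.sum((cand[:overlap] - target[:overlap]) ** 2)
                    if cost < best_cost:
                        best, best_cost = (i, j), cost
                i, j = best
            out[y:y + patch, x:x + patch] = exemplar[i:i + patch, j:j + patch]
    return out
```

In the paper's setting, matching operates on latent features rather than RGB values, which is why the abstract notes that a metric mismatch between latent space and the reconstructed content space can hurt quality; the proposed clustering constraint regularizes the latent distribution so that L2 distances of this kind become more meaningful.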
