Efficient View Path Planning for Autonomous Implicit Reconstruction

Implicit neural representations have shown promising potential for 3D scene reconstruction. Recent work applies them to autonomous 3D reconstruction by learning information gain for view path planning. Effective as this approach is, computing the information gain is expensive, and collision checking for a 3D point with an implicit representation is much slower than with volumetric representations. In this paper, we propose to 1) leverage a neural network as an implicit function approximator for the information gain field and 2) combine the fine-grained implicit representation with coarse volumetric representations to improve efficiency. Building on the improved efficiency, we further propose a novel informative path planning method based on a graph-based planner. Our method achieves significant improvements in reconstruction quality and planning efficiency over autonomous reconstruction baselines using implicit and explicit representations. We deploy the method on a real UAV, and the results show that it can plan informative views and reconstruct a scene with high quality.
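The sketch below illustrates the two ideas in the abstract under stated assumptions: a small MLP standing in for the implicit information-gain field, and a coarse voxel occupancy grid used for cheap collision checks before the more expensive gain query. This is not the authors' implementation; all class and parameter names (GainFieldMLP, CoarseOccupancyGrid, score_view, hidden sizes, voxel size) are hypothetical choices for illustration.

```python
# Minimal sketch (hypothetical, not the paper's code): an implicit
# information-gain field approximated by an MLP, plus a coarse voxel
# grid so collision checks avoid querying the implicit model per point.
import torch
import torch.nn as nn


class GainFieldMLP(nn.Module):
    """Maps a 3D position to a scalar information-gain estimate."""

    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # gain is non-negative
        )

    def forward(self, xyz):
        return self.net(xyz).squeeze(-1)


class CoarseOccupancyGrid:
    """Coarse volumetric map: O(1) voxel lookup replaces slow implicit queries."""

    def __init__(self, origin, voxel_size, dims):
        self.origin = torch.as_tensor(origin, dtype=torch.float32)
        self.voxel_size = voxel_size
        self.occ = torch.zeros(dims, dtype=torch.bool)  # True = occupied

    def is_free(self, xyz):
        idx = ((xyz - self.origin) / self.voxel_size).long()
        idx = idx.clamp(min=0)  # clip to grid bounds (sketch-level handling)
        for d in range(3):
            idx[..., d] = idx[..., d].clamp(max=self.occ.shape[d] - 1)
        return ~self.occ[idx[..., 0], idx[..., 1], idx[..., 2]]


def score_view(view_xyz, gain_mlp, grid):
    """Score a candidate view: cheap collision check first, gain query second."""
    if not grid.is_free(view_xyz).all():
        return torch.tensor(0.0)
    with torch.no_grad():
        return gain_mlp(view_xyz.unsqueeze(0)).squeeze(0)


# Usage example with hypothetical scene bounds and resolution.
grid = CoarseOccupancyGrid(origin=(0.0, 0.0, 0.0), voxel_size=0.5, dims=(64, 64, 32))
gain_mlp = GainFieldMLP()
print(score_view(torch.tensor([5.0, 5.0, 2.0]), gain_mlp, grid))
```

A graph-based planner, as mentioned in the abstract, would call a scoring routine like this on candidate viewpoints (graph nodes) and select a path that accumulates high information gain while staying in free space.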
