Learning the Next Best View for 3D Point Clouds via Topological Features

In this paper, we introduce a reinforcement learning approach that uses a novel topology-based information gain metric to direct the next best view of a noisy 3D sensor. The metric combines the disjoint sections of an observed surface to focus on high-detail features such as holes and concave sections. Experimental results show that our approach can aid in placing a robotic sensor to maximize the information provided by its streaming point cloud data. Furthermore, a labeled dataset of 3D objects, a CAD design for a custom robotic manipulator, and software for the transformation, union, and registration of point clouds have been publicly released to the research community.
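The metric itself is only summarized above. As an illustration of the general idea, the following is a minimal, hypothetical sketch of how a topology-based information gain could be scored for a candidate view by counting persistent connected components (disjoint surface sections) and loops (holes) in the observed point cloud with the ripser library. The function name, persistence threshold, and feature weights are assumptions made for this example and are not taken from the paper.

```python
# Hypothetical sketch: score a candidate view's point cloud by counting
# significant topological features (H0 components and H1 holes) via
# persistent homology. Assumes `pip install ripser numpy`; all weights
# and thresholds below are illustrative, not the paper's formulation.
import numpy as np
from ripser import ripser


def topological_info_gain(points: np.ndarray,
                          persistence_threshold: float = 0.05,
                          w_components: float = 1.0,
                          w_holes: float = 2.0) -> float:
    """Score an (N, 3) point cloud by its significant topological features."""
    if len(points) == 0:
        return 0.0

    # Vietoris-Rips persistence up to dimension 1 (components and loops).
    diagrams = ripser(points, maxdim=1)["dgms"]

    def count_significant(dgm: np.ndarray) -> int:
        # Keep features whose lifetime (death - birth) exceeds the threshold;
        # the essential component with infinite death is always kept.
        lifetimes = dgm[:, 1] - dgm[:, 0]
        return int(np.sum((lifetimes > persistence_threshold) | np.isinf(dgm[:, 1])))

    n_components = count_significant(diagrams[0])  # H0: disjoint surface sections
    n_holes = count_significant(diagrams[1])       # H1: holes in the surface

    # Views revealing many unresolved components or holes score higher,
    # steering the sensor toward incomplete, high-detail regions.
    return w_components * (n_components - 1) + w_holes * n_holes


if __name__ == "__main__":
    # Toy example: a noisy circle of points has one component and one hole.
    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    cloud = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
    cloud += 0.01 * np.random.randn(*cloud.shape)
    print(topological_info_gain(cloud))
```

In a next-best-view loop, a score of this kind would be evaluated (or predicted by a learned model) for each candidate sensor pose, and the pose with the highest expected gain would be selected.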
