Visual and tactile 3D point cloud data from real robots for shape modeling and completion

Representing 3D geometry for tasks such as rendering and reconstruction is an important goal in fields including computer graphics, computer vision, and robotics. Robotic applications in particular often require perceiving object shape from sensory data that is noisy and incomplete. This is a challenging task, and to facilitate the analysis of new methods and the comparison of approaches to shape modeling (e.g., surface estimation), completion, and exploration, we provide real sensory data acquired by exploring objects of varying complexity. The dataset contains visual and tactile readings in the form of 3D point clouds, obtained with two robot setups equipped with visual and tactile sensors. During data collection, the robots touch the experiment objects in a predefined manner at various exploration configurations, and the visual and tactile points are expressed in the same coordinate frame using the calibration between the robots and the cameras. The goal of this exhaustive exploration procedure is to sense parts of the objects that are not visible to the cameras but can be perceived by the tactile sensors activated at the touched areas. The data was used for shape completion and modeling via implicit surface representation and Gaussian-process-based regression in "Object shape estimation and modeling, based on sparse Gaussian process implicit surfaces, combining visual data and tactile exploration" [11], and partially in "Enhancing visual perception of shape through tactile glances" [12]; both works study efficient exploration of objects to reduce the number of touches.
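
As a minimal sketch (not the dataset's released tooling), the following Python snippet illustrates the two ideas described above: mapping tactile points into the camera frame with a calibration transform, and fitting a Gaussian-process implicit surface over the combined visual and tactile points. It assumes numpy and scikit-learn; the names visual_pts, tactile_pts, and T are hypothetical placeholders, and the 0/+1/-1 labeling follows the standard GPIS convention (0 on the surface, positive outside, negative inside).

```python
# Hedged sketch: GP implicit surface over combined visual + tactile points.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def to_common_frame(points, T):
    """Apply a 4x4 homogeneous calibration transform to an Nx3 point array."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homog.T).T[:, :3]

# Placeholder inputs: visual points are assumed to already be in the camera
# frame; tactile points are mapped into it via the robot-camera calibration T.
visual_pts = np.random.rand(200, 3)   # stand-in for visual dataset points
tactile_pts = np.random.rand(20, 3)   # stand-in for tactile contact points
T = np.eye(4)                         # stand-in calibration transform
tactile_in_cam = to_common_frame(tactile_pts, T)

# Surface points get label 0; two off-surface anchors (+1 far outside,
# -1 at the centroid) keep the GP from collapsing to the zero function.
surface = np.vstack([visual_pts, tactile_in_cam])
centroid = surface.mean(axis=0)
X = np.vstack([surface, centroid + np.array([0.0, 0.0, 2.0]), centroid])
y = np.concatenate([np.zeros(len(surface)), [1.0], [-1.0]])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
gp.fit(X, y)

# The estimated surface is the zero level set of the GP mean; the predictive
# standard deviation indicates shape uncertainty, which exploration strategies
# can use to select where to touch next.
mean, std = gp.predict(np.random.rand(5, 3), return_std=True)
```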

[1] Marc Toussaint et al., "Gaussian process implicit surfaces for shape estimation and grasping," IEEE International Conference on Robotics and Automation (ICRA), 2011.

[2] Pierre Alliez et al., "A Survey of Surface Reconstruction from Point Clouds," Computer Graphics Forum, 2017.

[3] Paul J. Besl et al., "A Method for Registration of 3-D Shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992.

[4] Raphael Grimm et al., "Visuo-Haptic Grasping of Unknown Objects based on Gaussian Process Implicit Surfaces and Deep Learning," IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2019.

[5] Richard A. Newcombe et al., "DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation," IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

[6] Danica Kragic et al., "Active 3D Segmentation through Fixation of Previously Unseen Objects," British Machine Vision Conference (BMVC), 2010.

[7] Marc Toussaint et al., "Uncertainty aware grasping and tactile exploration," IEEE International Conference on Robotics and Automation (ICRA), 2013.

[9] Danica Kragic et al., "Data-Driven Grasp Synthesis—A Survey," IEEE Transactions on Robotics, 2013.

[11] Mårten Björkman et al., "Object shape estimation and modeling, based on sparse Gaussian process implicit surfaces, combining visual data and tactile exploration," Robotics and Autonomous Systems, 2020.

[12] Danica Kragic et al., "Enhancing visual perception of shape through tactile glances," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013.

[13] Martial Hebert et al., "PCN: Point Completion Network," International Conference on 3D Vision (3DV), 2018.

[14] Danica Kragic et al., "Trends and challenges in robot manipulation," Science, 2019.