Volumetric reconstruction applied to perceptual studies of size and weight

We explore the application of volumetric reconstruction from structured-light sensors in cognitive neuroscience, specifically in the quantification of the size-weight illusion, whereby humans systematically tend to perceive smaller objects as heavier than larger objects of equal mass. We investigate the performance of two commercial structured-light scanning systems in comparison with one we developed specifically for this application. Our method has two distinctive features. First, it samples only a sparse set of viewpoints, unlike systems such as KinectFusion. Second, instead of building a distance field directly for points-to-surface conversion, we pursue a first-order approach: the distance function is recovered from its gradient by screened Poisson reconstruction, which is highly resilient to noise yet preserves high-frequency signal components. Our experiments show that the quality of metric reconstruction from structured-light sensors is subject to systematic biases, and they highlight the factors that influence it. Our main performance index rates estimates of volume (a proxy for size), for which we review a well-known formula applicable to incomplete meshes. Our code and data will be made publicly available upon completion of the anonymous review process.
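
For readers unfamiliar with the technique, the following is a minimal sketch of the variational problem behind screened Poisson reconstruction (Kazhdan and Hoppe, 2013); the notation here is illustrative and not taken from the paper's implementation. An indicator- or distance-like scalar function \chi is fitted so that its gradient matches a vector field \vec{V} assembled from the oriented samples, while "screening" terms with weight \alpha pull \chi toward a target iso-value \tau at the sample points P:

    E(\chi) = \int_{\Omega} \bigl\lVert \nabla\chi(x) - \vec{V}(x) \bigr\rVert^{2} \, dx
              + \alpha \sum_{p \in P} \bigl( \chi(p) - \tau \bigr)^{2} .

Dropping boundary terms, the minimizer satisfies the screened Poisson equation
\Delta\chi - \alpha \sum_{p \in P} (\chi - \tau)\,\delta_{p} = \nabla \cdot \vec{V},
which in the original formulation is discretized and solved over an adaptive octree. The screening term is what lets the solution follow the data closely without amplifying sensor noise.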
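
The abstract leaves the volume formula unspecified; a standard candidate for triangle meshes, given here purely as illustration rather than as the paper's exact choice, is the divergence-theorem identity

    V = \frac{1}{3} \oint_{S} x \cdot n \, dA
      = \frac{1}{6} \sum_{(i,j,k) \in F} v_{i} \cdot ( v_{j} \times v_{k} ),

which is exact for a closed, consistently oriented surface. The Python sketch below is our own illustrative code, not the authors' implementation; the function name signed_volume and the array layout are assumptions.

    # Illustrative sketch: signed-volume estimate of a triangle mesh via the
    # divergence theorem, V = (1/6) * sum_f  v1 . (v2 x v3).
    # Exact for a closed, consistently oriented mesh; on an incomplete mesh it
    # yields an estimate whose error grows with the missing surface area and
    # with the distance of the holes from the chosen origin.
    import numpy as np

    def signed_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
        """vertices: (n, 3) float array; faces: (m, 3) int array of triangle indices."""
        v1 = vertices[faces[:, 0]]
        v2 = vertices[faces[:, 1]]
        v3 = vertices[faces[:, 2]]
        # Signed volume of the tetrahedron spanned by each triangle and the origin.
        return float(np.einsum('ij,ij->i', v1, np.cross(v2, v3)).sum() / 6.0)

    if __name__ == '__main__':
        # Unit cube centred at the origin as a sanity check: expected volume 1.0.
        v = np.array([[x, y, z] for x in (-.5, .5) for y in (-.5, .5) for z in (-.5, .5)])
        f = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],   # x = -0.5, x = +0.5
                      [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],   # y = -0.5, y = +0.5
                      [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])  # z = -0.5, z = +0.5
        print(signed_volume(v, f))  # ~1.0 (sign flips if faces are inward-oriented)

Because the identity is origin-independent only when the surface is closed, an incomplete mesh makes the estimate sensitive to where the origin sits relative to the holes, which is one reason the formula deserves the careful review mentioned in the abstract.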
