Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor

This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. The model supports both qualitative and quantitative analysis of Kinect™ sensors used as 3D perception devices. To this end, we derived the propagation relationship between the uncertainties in the disparity image space and those in real Cartesian space, based on the mapping function between the two spaces. Using this propagation relationship, we obtained a mathematical model for the covariance matrix of the measurement error, which represents the uncertainty of the spatial position of visual features measured by a Kinect™ sensor. To derive a quantitative spatial uncertainty model, we estimated the covariance matrix in the disparity image space from collected visual feature data, and then computed the spatial uncertainty by applying this covariance matrix, together with the calibrated sensor parameters, to the proposed mathematical model. The spatial uncertainty model was verified by comparing the uncertainty ellipsoids of the spatial covariance matrices against the distribution of scattered matched visual features. We expect this spatial uncertainty model and its analyses to be useful in a variety of Kinect™ sensor applications.
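The core idea of the abstract, propagating a covariance matrix from the disparity image space (u, v, d) to Cartesian space (x, y, z), can be sketched with standard first-order (Jacobian) error propagation. The sketch below assumes the usual stereo/disparity back-projection model z = f·b/d, x = (u − cₓ)z/f, y = (v − c_y)z/f; the camera parameters and the image-space covariance used here are illustrative values, not the authors' calibration results.

```python
import numpy as np

def disparity_to_point(u, v, d, f, b, cx, cy):
    """Back-project pixel (u, v) with disparity d [px] to Cartesian (x, y, z) [m].

    Assumes the standard stereo/disparity model: z = f*b/d.
    f: focal length [px], b: baseline [m], (cx, cy): principal point [px].
    """
    z = f * b / d
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.array([x, y, z])

def propagate_covariance(u, v, d, f, b, cx, cy, cov_uvd):
    """First-order propagation of the (u, v, d) covariance into (x, y, z) space:

        cov_xyz = J @ cov_uvd @ J.T

    where J is the 3x3 Jacobian of the back-projection at (u, v, d).
    """
    x, y, z = disparity_to_point(u, v, d, f, b, cx, cy)
    J = np.array([
        [z / f, 0.0,   -x / d],   # dx/du, dx/dv, dx/dd
        [0.0,   z / f, -y / d],   # dy/du, dy/dv, dy/dd
        [0.0,   0.0,   -z / d],   # dz/du, dz/dv, dz/dd
    ])
    return J @ cov_uvd @ J.T

if __name__ == "__main__":
    # Illustrative Kinect-like parameters (hypothetical, not a real calibration).
    f, b, cx, cy = 580.0, 0.075, 320.0, 240.0
    u, v, d = 400.0, 200.0, 20.0
    # Assumed feature-measurement covariance in the disparity image space [px^2].
    cov_uvd = np.diag([0.25, 0.25, 0.5])
    cov_xyz = propagate_covariance(u, v, d, f, b, cx, cy, cov_uvd)
    print(np.round(cov_xyz, 6))
```

Because dz/dd = −z/d and z = f·b/d, the depth variance in this model scales as z⁴ for a fixed disparity variance, which is consistent with the commonly reported behavior that Kinect depth uncertainty grows rapidly with distance. The eigenvectors and eigenvalues of `cov_xyz` give the axes of the uncertainty ellipsoid the abstract uses for verification.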
