Mobile Robot Indoor Positioning Based on a Combination of Visual and Inertial Sensors

Multi-sensor integrated navigation technology has been applied to the indoor navigation and positioning of robots. To address the low navigation accuracy and error accumulation of mobile robots that rely on a single sensor, this paper presents an indoor mobile robot positioning method based on a combination of visual and inertial sensors. First, a visual sensor (Kinect) is used to obtain color and depth images, and feature matching is performed with an improved scale-invariant feature transform (SIFT) algorithm. The absolute orientation algorithm is then used to calculate the rotation matrix and translation vector of the robot between two consecutive image frames. An inertial measurement unit (IMU) offers high-frequency updates and rapid, accurate positioning, and can compensate for the Kinect's low update rate and limited precision; it provides three-dimensional data such as acceleration, angular velocity, magnetic field strength, and temperature in real time. The data from the visual sensor are loosely coupled with those from the IMU: the differences between the positions and attitudes output by the two sensors are optimally fused by an adaptive fade-out extended Kalman filter to estimate the errors. Finally, several experiments show that this method significantly improves the indoor positioning accuracy of mobile robots based on visual and inertial sensors.
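
The absolute-orientation step can be illustrated with a short sketch. Given matched 3-D points from two consecutive Kinect frames (matched SIFT keypoints back-projected using the depth image), the rotation matrix and translation vector have a standard SVD-based closed-form solution (Kabsch/Umeyama). The snippet below is a minimal sketch of that idea under those assumptions, not the paper's exact implementation; the function and variable names are illustrative.

```python
import numpy as np

def absolute_orientation(p_prev, p_curr):
    """Estimate R, t such that p_curr ~ R @ p_prev + t.

    p_prev, p_curr: (N, 3) arrays of matched 3-D points from two
    consecutive Kinect frames (SIFT matches back-projected with depth).
    Uses the standard SVD-based (Kabsch/Umeyama) closed-form solution.
    """
    # Centroids of the two point sets
    c_prev = p_prev.mean(axis=0)
    c_curr = p_curr.mean(axis=0)

    # Cross-covariance of the centered point sets
    H = (p_prev - c_prev).T @ (p_curr - c_curr)

    # SVD of the cross-covariance gives the best-fit rotation
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T

    # Guard against a reflection (det(R) = -1)
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    # Translation follows from the two centroids
    t = c_curr - R @ c_prev
    return R, t
```

In the loosely coupled scheme described above, the frame-to-frame pose obtained this way is compared with the IMU-propagated pose, and the resulting position and attitude differences serve as the measurement input to the adaptive fade-out extended Kalman filter.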
