Real-time location estimation for indoor navigation using a visual-inertial sensor

Purpose: The purpose of this study is to achieve real-time localization for indoor navigation using visual and inertial sensors. Providing an accurate location has become a popular research topic in the field of indoor navigation. Although the complementarity of vision and inertia has been widely applied in indoor navigation, many problems remain, such as inertial sensor bias calibration, unsynchronized visual and inertial data acquisition, and the large volume of stored data.

Design/methodology/approach: First, this study demonstrates that a vanishing point (VP) evaluation function improves the precision of VP extraction, and the nearest ground corner point (NGCP) of the adjacent frame is estimated by pre-integrating the inertial sensor. The sequential similarity detection algorithm (SSDA) and random sample consensus (RANSAC) algorithms are adopted to accurately match the adjacent NGCPs within the estimated region of interest. Second, a visual pose model is established using the camera's intrinsic parameters, the VP and the NGCP, and an inertial pose model is established by pre-integration. Third, location is calculated by fusing the visual and inertial models.

Findings: This paper proposes a novel method that fuses visual and inertial sensors to localize in indoor environments. The authors describe the construction of an embedded hardware platform and compare the results with a mature method and with POSAV310.

Originality/value: This paper proposes a VP evaluation function that is used to extract the optimal vanishing point from the intersections of multiple parallel lines. To improve the extraction speed for adjacent frames, the authors first propose fusing the NGCP of the current frame with the calibrated pre-integration to estimate the NGCP of the next frame. The visual pose model is established using the VP, the NGCP and the calibrated inertial sensor. This theory offers linear processing equations for the gyroscope and accelerometer derived from the visual and inertial pose models.
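The pipeline above hinges on pre-integrating IMU samples between camera frames and then using the predicted motion to place a region of interest around the expected NGCP in the next frame. A minimal sketch of that idea is given below; it is not the authors' implementation, and the first-order rotation update, the pure-translation projection model and all names (`preintegrate`, `predict_roi`, the known ground-point depth) are simplifying assumptions for illustration.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector, so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def preintegrate(gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """Accumulate gyroscope/accelerometer samples between two camera frames.

    Returns the rotation, velocity and position increments expressed in the
    frame of the first exposure. Sensor biases are assumed already calibrated
    out, as the paper's calibration step provides.
    """
    R = np.eye(3)          # rotation increment
    dv = np.zeros(3)       # velocity increment
    dp = np.zeros(3)       # position increment
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * (R @ a + g) * dt**2
        dv += (R @ a + g) * dt
        # first-order approximation of the exponential map for the rotation
        R = R @ (np.eye(3) + skew(w * dt))
    return R, dv, dp

def predict_roi(ngcp_uv, dp, K, depth, half=20):
    """Shift the previous frame's NGCP pixel by the pre-integrated translation
    projected through the intrinsic matrix K (hypothetical simple model:
    pure translation, known depth of the ground point)."""
    du = K[0, 0] * dp[0] / depth
    dv_ = K[1, 1] * dp[1] / depth
    u, v = ngcp_uv
    return (int(u + du - half), int(v + dv_ - half),
            int(u + du + half), int(v + dv_ + half))
```

SSDA/RANSAC matching would then be run only inside the rectangle returned by `predict_roi`, which is what makes the adjacent-frame NGCP search fast.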
