Robust Visual Compass Using Hybrid Features for Indoor Environments

Orientation estimation is a crucial part of robotics tasks such as motion control, autonomous navigation, and 3D mapping. In this paper, we propose a robust vision-based method to estimate a robot's drift-free orientation with RGB-D cameras. First, we detect and track hybrid features (i.e., planes, lines, and points) from the color and depth images; these features provide reliable constraints even in challenging indoor environments with low texture or no consistent lines. Then, we construct a cost function based on these features and, by minimizing it, obtain an accurate rotation matrix for each captured frame with respect to its reference keyframe. Furthermore, we present a vanishing-direction estimation method to extract the Manhattan World (MW) axes; by aligning the current MW axes with the global MW axes, we refine the rotation matrix of each keyframe and achieve drift-free orientation. Experiments on public RGB-D datasets demonstrate the robustness and accuracy of the proposed algorithm for orientation estimation. In addition, we apply the proposed visual compass to pose estimation, and evaluation on public sequences shows improved accuracy.
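To make the drift-correction step concrete, below is a minimal sketch (in Python with NumPy, not the authors' implementation) of how a keyframe rotation estimate could be refined by aligning the current MW axes with the global MW axes. The function name, its arguments, and the use of an SVD-based orthogonal Procrustes alignment are illustrative assumptions; the abstract does not specify the alignment procedure.

    import numpy as np

    def align_manhattan_axes(R_keyframe, axes_current, axes_global):
        """Refine a keyframe rotation by aligning the Manhattan World (MW)
        axes observed in the current frame with the global MW axes.

        R_keyframe   : 3x3 rotation estimate for the keyframe (e.g., from
                       minimizing the hybrid plane/line/point cost).
        axes_current : 3x3 matrix whose columns are the MW axes extracted
                       from the current frame's vanishing directions.
        axes_global  : 3x3 matrix whose columns are the global MW axes.

        Returns the drift-corrected 3x3 rotation matrix.
        """
        # Orthogonal Procrustes: find the rotation that best maps the
        # current MW axes onto the global MW axes (SVD-based solution).
        H = axes_current @ axes_global.T
        U, _, Vt = np.linalg.svd(H)
        R_align = Vt.T @ U.T
        # Guard against a reflection (det = -1): flip the last row of Vt.
        if np.linalg.det(R_align) < 0:
            Vt[-1, :] *= -1
            R_align = Vt.T @ U.T
        # Apply the correction as a left-multiplied rotation.
        return R_align @ R_keyframe

In this sketch, the Procrustes solution gives the rotation that best maps the observed axes onto the global axes; left-multiplying it onto the keyframe estimate removes accumulated rotational drift while leaving the frame-to-keyframe estimates untouched.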
