A Practical Map Needs Direct Visual Odometry

Real-time methods for simultaneous localization and mapping (SLAM) and visual odometry (VO) are fundamental building blocks for many emerging technologies, from autonomous cars and UAVs to virtual and augmented reality, and they have made significant progress. For a long time, the field was dominated by feature-based methods, which depend on, and are therefore sensitive to, the texture of the environment [1]. In addition, the sparse maps constructed by such methods are insufficient for downstream applications such as navigation. In recent years, direct formulations, which operate on raw pixel intensities instead of extracted features, have become popular. However, some researchers [2] argue that the main limitation of direct methods is their reliance on consistent appearance between matched pixels, an assumption that is seldom satisfied in robotic applications. Handling challenging illumination conditions thus remains a long-standing issue for direct VO, and a number of methods focus specifically on improving robustness to illumination changes.
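To make the brightness-constancy limitation concrete, here is a toy 1-D sketch (not taken from any of the cited papers; all function names and the affine illumination model are illustrative assumptions). A "direct" alignment estimates camera motion by minimizing the photometric error between two frames; the same minimization degrades as soon as the scene's appearance changes between frames:

```python
import numpy as np

def photometric_error(ref, cur, d):
    """Mean squared intensity residual over the overlap, assuming a shift of d pixels."""
    r = ref[d:] - cur[:len(cur) - d]
    return float(np.mean(r ** 2))

def estimate_shift(ref, cur, max_d=10):
    """Brute-force direct alignment: pick the shift with minimal photometric error."""
    errors = [photometric_error(ref, cur, d) for d in range(max_d + 1)]
    return int(np.argmin(errors))

# Toy 1-D "scene"; the two frames observe it with a 3-pixel camera translation.
x = np.linspace(0.0, 4.0 * np.pi, 210)
scene = np.sin(x) + 0.5 * np.sin(3.0 * x)
ref = scene[:200]       # reference frame
cur = scene[3:203]      # current frame: same scene, shifted by 3 pixels

# With consistent appearance, direct alignment recovers the true motion.
d_hat = estimate_shift(ref, cur)

# An illumination change (here a hypothetical affine model: gain 0.7, bias 0.2)
# breaks brightness constancy: the residual at the *true* shift is no longer zero.
cur_dark = 0.7 * cur + 0.2
err_true = photometric_error(ref, cur, 3)
err_dark = photometric_error(ref, cur_dark, 3)
```

Under unchanged illumination `d_hat` equals the true shift of 3 and `err_true` is essentially zero, while `err_dark` is strictly positive even at the correct shift; this is the appearance-consistency assumption that work such as [2] tries to relax.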

[1] Alois Knoll et al. Efficient compositional approaches for real-time robust direct visual odometry from RGB-D data. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013.

[2] Brett Browning et al. Direct Visual Odometry in Low Light Using Binary Descriptors. IEEE Robotics and Automation Letters, 2017.

[3] J. M. M. Montiel et al. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, 2015.