Adopting Feature-Based Visual Odometry for Resource-Constrained Mobile Devices

In many practical applications of mobile devices, self-localization of the user in a GPS-denied indoor environment is required. Among the available approaches, visual odometry enables continuous, precise egomotion estimation in previously unknown environments. In this paper we examine the usual pipeline of a monocular visual odometry system, identify its bottlenecks, and demonstrate how to work around the resource constraints in order to implement a real-time visual odometry system on a smartphone or tablet.
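For reference, a minimal sketch of one iteration of such a feature-based monocular pipeline is given below, assuming OpenCV is available on the device (for example through the Android NDK). The particular choices (FAST corners, pyramidal Lucas-Kanade tracking, and five-point RANSAC essential-matrix estimation) are illustrative of the typical pipeline rather than the exact configuration evaluated in the paper.

    // Minimal sketch of one feature-based monocular VO iteration (illustrative only).
    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <opencv2/video/tracking.hpp>
    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Track features from the previous frame into the current one, estimate the
    // essential matrix with RANSAC and recover the relative pose. K is the camera
    // intrinsic matrix; R and t receive the estimated motion (in monocular VO the
    // translation t is known only up to scale).
    bool estimateMotion(const cv::Mat& prevGray, const cv::Mat& currGray,
                        const cv::Mat& K, cv::Mat& R, cv::Mat& t)
    {
        // 1. Detect corners in the previous frame (FAST is cheap enough for mobile CPUs).
        std::vector<cv::KeyPoint> keypoints;
        cv::FAST(prevGray, keypoints, /*threshold=*/20, /*nonmaxSuppression=*/true);
        if (keypoints.size() < 8) return false;

        std::vector<cv::Point2f> prevPts, currPts;
        cv::KeyPoint::convert(keypoints, prevPts);

        // 2. Track the corners into the current frame with pyramidal Lucas-Kanade flow.
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

        // Keep only the successfully tracked point pairs.
        std::vector<cv::Point2f> p0, p1;
        for (size_t i = 0; i < status.size(); ++i) {
            if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }
        }
        if (p0.size() < 8) return false;

        // 3. Estimate the essential matrix with the five-point algorithm inside RANSAC,
        //    then decompose it into the relative rotation and unit-norm translation.
        cv::Mat inlierMask;
        cv::Mat E = cv::findEssentialMat(p0, p1, K, cv::RANSAC, 0.999, 1.0, inlierMask);
        if (E.empty()) return false;
        cv::recoverPose(E, p0, p1, K, R, t, inlierMask);
        return true;
    }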
