Indoor Localisation and Navigation on Augmented Reality Devices

We present a novel indoor mapping and localisation approach for Augmented Reality (AR) devices that fuses inertial sensing with visual odometry. We demonstrate the approach on Google Glass (GG) and Google Cardboard (GC) supported by an Android phone. Our work applies an Extended Kalman Filter (EKF) to sensor fusion for an AR-based application, with previous work on Bag of Visual Words Pairs (BoVWP) [10] based image matching used for bundle adjustment on the fused odometry. We present an empirical validation of this approach in three different indoor spaces in an office environment, and conclude that vision complemented with inertial data effectively compensates for the user's ego-motion, improving the accuracy of map generation and localisation.
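
To make the fusion step concrete, the sketch below shows a minimal planar EKF in which IMU-derived motion drives the prediction and a visual-odometry pose measurement provides the correction. The state layout, noise covariances, and the class and method names (PlanarEKF, predict, update) are illustrative assumptions for this sketch, not the implementation described in the paper.

```python
# Minimal sketch of EKF-based inertial / visual-odometry fusion.
# Planar model and noise values are assumptions, not the paper's parameters.
import numpy as np

class PlanarEKF:
    """State x = [px, py, yaw]. The IMU drives prediction; visual odometry corrects it."""

    def __init__(self):
        self.x = np.zeros(3)                      # pose estimate
        self.P = np.eye(3) * 0.1                  # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])      # process noise (assumed)
        self.R = np.diag([0.05, 0.05, 0.03])      # visual-odometry noise (assumed)

    def predict(self, v, yaw_rate, dt):
        """Propagate the pose with IMU-derived forward speed v and yaw rate."""
        px, py, yaw = self.x
        self.x = np.array([px + v * np.cos(yaw) * dt,
                           py + v * np.sin(yaw) * dt,
                           yaw + yaw_rate * dt])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                      [0.0, 1.0,  v * np.cos(yaw) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, vo_pose):
        """Correct the prediction with a visual-odometry pose measurement [px, py, yaw]."""
        H = np.eye(3)                             # VO observes the full pose directly
        y = vo_pose - H @ self.x                  # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P

# Example: one IMU prediction step followed by one visual-odometry correction.
ekf = PlanarEKF()
ekf.predict(v=1.0, yaw_rate=0.1, dt=0.02)
ekf.update(np.array([0.021, 0.001, 0.002]))
print(ekf.x)
```

In the actual system, the prediction rate would follow the IMU (hundreds of Hz) while visual-odometry corrections arrive at camera frame rate, which is the usual reason for combining the two sensors.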

[1] Tomohiro Shibata et al., High performance loop closure detection using bag of word pairs, Robotics Auton. Syst., 2016.

[2] Gordon Wyeth et al., Towards training-free appearance-based localization: Probabilistic models for whole-image descriptors, 2014 IEEE International Conference on Robotics and Automation (ICRA).

[3] Michael Milford et al., Condition-invariant, top-down visual place recognition, 2014 IEEE International Conference on Robotics and Automation (ICRA).

[4] Paul Lukowicz et al., In the blink of an eye: combining head motion and eye blink frequency for activity recognition with Google Glass, AH, 2014.

[5] Ruzena Bajcsy et al., Precise indoor localization using smart phones, ACM Multimedia, 2010.

[6] Avideh Zakhor et al., Image Based Localization in Indoor Environments, 2013 Fourth International Conference on Computing for Geospatial Research and Application.

[7] Carlo Tomasi et al., Good features to track, 1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.

[8] Avideh Zakhor et al., Indoor localization and visualization using a human-operated backpack system, 2010 International Conference on Indoor Positioning and Indoor Navigation.

[9] Peter Corke et al., An Introduction to Inertial and Visual Sensing, Int. J. Robotics Res., 2007.

[10] Frédéric Lerasle et al., A visual landmark framework for indoor mobile robot navigation, Proceedings 2002 IEEE International Conference on Robotics and Automation.

[11] Joo-Hwee Lim et al., A Wearable Face Recognition System on Google Glass for Assisting Social Interactions, ACCV Workshops, 2014.

[12] Jorge Dias et al., Relative Pose Calibration Between Visual and Inertial Sensors, Int. J. Robotics Res., 2007.

[13] Joachim Weickert et al., Lucas/Kanade Meets Horn/Schunck: Combining Local and Global Optic Flow Methods, International Journal of Computer Vision, 2005.

[14] Thomas B. Schön et al., Robust real-time tracking by fusing measurements from inertial and vision sensors, Journal of Real-Time Image Processing, 2007.

[15] Oliver J. Woodman, An introduction to inertial navigation, 2007.

[16] Robert Harle et al., Pedestrian localisation for indoor environments, UbiComp, 2008.

[17] Liviu Iftode et al., Indoor Localization Using Camera Phones, Seventh IEEE Workshop on Mobile Computing Systems & Applications (WMCSA'06 Supplement), 2006.

[18] Zhihan Lv et al., Hand-free motion interaction on Google Glass, SIGGRAPH ASIA Mobile Graphics and Interactive Applications, 2014.

[19] Didier Stricker et al., Advanced tracking through efficient image processing and visual-inertial sensor fusion, 2008 IEEE Virtual Reality Conference.

[20] Feng Zhao et al., A reliable and accurate indoor localization method using phone inertial sensors, UbiComp, 2012.