Visual Location Recognition Based on Coarse-to-Fine Image Retrieval and Epipolar Geometry Constraint for Urban Environment

Vision-based location recognition for a mobile device is an important problem in many applications, such as visual navigation, autonomous driving, and augmented reality. In this paper, a visual location recognition system based on coarse-to-fine image retrieval and the epipolar geometry constraint is proposed. The basic idea is to match a user-captured image against geo-tagged images in a database and then estimate the user's location using the epipolar geometry constraint. The coarse-to-fine image retrieval step selects database images that depict the same scene as the user-captured image, and the epipolar geometry constraint is then used to refine the location estimate from the geographical information attached to those database images. Experiments on the visual location recognition task show that the proposed system achieves excellent location-recognition performance.
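To make the pipeline concrete, the following is a minimal sketch of the coarse-to-fine matching and epipolar pose step, written with OpenCV. The global descriptor used for the coarse stage, the function names, and the camera matrix K are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
# Hedged sketch: coarse retrieval of geo-tagged candidates, then fine
# verification with local features and relative pose from the essential
# matrix (epipolar geometry constraint). Assumes OpenCV and NumPy.
import cv2
import numpy as np


def coarse_retrieval(query_img, db_images, top_k=5):
    """Rank (image, geotag) database entries by a cheap holistic
    descriptor (here: a normalized 32x32 grayscale thumbnail)."""
    def holistic(img):
        small = cv2.resize(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), (32, 32))
        v = small.astype(np.float32).ravel()
        return v / (np.linalg.norm(v) + 1e-8)

    q = holistic(query_img)
    scores = [float(q @ holistic(img)) for img, _ in db_images]
    order = np.argsort(scores)[::-1][:top_k]
    return [db_images[i] for i in order]


def fine_match_and_pose(query_img, db_img, K):
    """Verify a candidate with ORB matches and recover the relative
    pose (R, t) from the essential matrix; t is only a direction."""
    orb = cv2.ORB_create(2000)
    kq, dq = orb.detectAndCompute(query_img, None)
    kd, dd = orb.detectAndCompute(db_img, None)
    if dq is None or dd is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(dq, dd)
    if len(matches) < 8:
        return None
    pts_q = np.float32([kq[m.queryIdx].pt for m in matches])
    pts_d = np.float32([kd[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts_q, pts_d, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    if E is None:
        return None
    _, R, t, _ = cv2.recoverPose(E, pts_q, pts_d, K, mask=mask)
    return R, t
```

Because the translation recovered from the essential matrix is only defined up to scale, the user's geographic position is obtained by combining the relative pose directions from two or more geo-tagged database views (or, in the simplest fallback, by adopting the geo-tag of the best-verified view).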
