Self-Localization at Street Intersections

There is growing interest among smartphone users in the ability to determine their precise location in their environment for a variety of applications related to wayfinding, travel and shopping. While GPS provides valuable self-localization estimates, its accuracy is limited to approximately 10 meters in most urban settings. This paper focuses on the self-localization needs of blind or visually impaired travelers, who face the challenge of negotiating street intersections. These travelers need more precise self-localization to align themselves properly with crosswalks, signal lights and other features such as walk light pushbuttons. We demonstrate a novel computer vision-based localization approach tailored to the street intersection domain. Unlike most computer vision-based localization techniques, which typically assume the presence of detailed, high-quality 3D models of urban environments, our technique harnesses ubiquitous satellite imagery (e.g., Google Maps) to create simple 2D maps of each intersection. Not only does this technique scale naturally to the great majority of street intersections in urban areas, but it also incorporates the specific metric information that blind or visually impaired travelers need, namely the locations of intersection features such as crosswalks. Key to our approach is the integration of IMU (inertial measurement unit) information with geometric information obtained from image panorama stitching. Finally, we evaluate the localization performance of our algorithm on a dataset of intersection panoramas, demonstrating the feasibility of our approach.
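To make the matching step concrete, the sketch below illustrates one way absolute bearings measured in an IMU-aligned panorama could be matched against a satellite-derived 2D intersection map to recover the camera position. This is an illustration of the general bearing-to-map geometry under stated assumptions, not the paper's implementation; the feature coordinates, bearing values and the residuals helper are all invented for the example.

```python
# Hypothetical sketch: estimate a 2D position at an intersection from
# absolute bearings to known map features.
#
# Assumptions (illustrative, not from the paper):
#   - crosswalk corner coordinates come from a satellite-derived 2D map,
#     in meters, in a local frame with +x east and +y north;
#   - each feature's absolute bearing (radians, clockwise from north) is
#     obtained by locating it in a stitched panorama and adding the
#     IMU/compass heading of the panorama.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical map: (x, y) of crosswalk corners in meters.
MAP_FEATURES = np.array([
    [ 5.0,  12.0],
    [-4.0,  11.5],
    [ 6.0, -10.0],
    [-5.5,  -9.0],
])

# Hypothetical measured bearings to those features (radians).
BEARINGS = np.array([0.38, -0.33, 2.60, -2.55])

def residuals(pos, feats, bearings):
    """Angular difference between predicted and measured bearings."""
    dx = feats[:, 0] - pos[0]
    dy = feats[:, 1] - pos[1]
    predicted = np.arctan2(dx, dy)  # bearing clockwise from north (+y)
    # Wrap residuals to (-pi, pi] so the optimizer sees smooth errors.
    return np.arctan2(np.sin(predicted - bearings),
                      np.cos(predicted - bearings))

# Solve for the camera position by nonlinear least squares, starting
# from the intersection center.
fit = least_squares(residuals, x0=[0.0, 0.0],
                    args=(MAP_FEATURES, BEARINGS))
print("estimated position (m):", fit.x)
```

With four or more well-spread features the two position unknowns are overdetermined, so the residual of the fit can also serve as a sanity check on the IMU heading.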
