Closing the Gap: Designing for the Last-Few-Meters Wayfinding Problem for People with Visual Impairments

Despite the central role of the Global Positioning System (GPS) as a navigation tool for people with visual impairments (VI), a crucial missing piece of point-to-point navigation with these systems is the last-few-meters wayfinding problem. Because of GPS inaccuracy and inadequate map data, systems often bring a user to the vicinity of a destination but not to its exact location, creating challenges such as difficulty locating a building entrance or picking out a specific storefront from a row of stores. In this paper, we study this problem space in two parts: (1) a formative study (N=22) to understand challenges, current resolution techniques, and user needs; and (2) a design probe study (N=13) using a novel vision-based system, Landmark AI, to understand how technology can better address aspects of this problem. Based on these investigations, we articulate a design space for systems addressing this challenge, along with implications for future systems that support precise navigation for people with VI.
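The abstract does not describe how Landmark AI is implemented, so the following is only a rough illustration of what a vision-based landmark probe for last-few-meters wayfinding could look like: a minimal Python sketch that classifies a single camera frame against a small set of landmark classes with an ImageNet-pretrained MobileNetV2 and stays silent below a confidence threshold. The landmark class list, the threshold value, and the (elided) fine-tuning step are all assumptions for illustration, not the authors' actual system.

```python
# Illustrative sketch only, not the Landmark AI implementation.
# Assumptions: a hypothetical landmark class list, a hypothetical
# confidence threshold, and a fine-tuned checkpoint that is elided here.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Hypothetical "last-few-meters" landmarks such a probe might recognize.
LANDMARKS = ["door", "stairs", "storefront_sign", "crosswalk", "other"]
CONFIDENCE_THRESHOLD = 0.80  # announce only high-confidence matches

# Start from an ImageNet-pretrained MobileNetV2 (chosen for its small
# on-device footprint) and swap the classifier head for the landmark classes.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = torch.nn.Linear(model.last_channel, len(LANDMARKS))
# In practice the new head would be fine-tuned on labeled landmark images;
# loading such a checkpoint is omitted from this sketch.
model.eval()

# Standard ImageNet preprocessing for a single RGB frame.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def describe_frame(image_path: str) -> str:
    """Return a spoken-style description for one camera frame, or an
    empty string when the model is not confident enough to announce."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = F.softmax(model(batch), dim=1).squeeze(0)
    conf, idx = probs.max(dim=0)
    if conf.item() < CONFIDENCE_THRESHOLD:
        return ""  # stay quiet rather than risk a misleading announcement
    return f"{LANDMARKS[idx.item()]} ahead ({conf.item():.0%} confident)"
```

The stay-quiet-below-threshold behavior reflects a design concern implicit in the problem statement: in last-few-meters guidance, confidently announcing the wrong storefront or entrance can be worse for a user with VI than saying nothing at all.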
