Crowdsourcing the Installation and Maintenance of Indoor Localization Infrastructure to Support Blind Navigation

Indoor navigation systems can make unfamiliar buildings more accessible to people with vision impairments, but their adoption is hampered by the effort of installing infrastructure and maintaining it over time. Most solutions in this space require augmenting the environment with add-ons, such as Bluetooth beacons, and installing and calibrating such infrastructure takes time and expertise. Once installed, localization accuracy often degrades as beacon batteries die, beacons go missing, or beacons otherwise stop working; even systems installed by experts can become unreliable weeks, months, or years after installation. To address this problem, we created LuzDeploy, a physical crowdsourcing system that organizes non-experts to install and maintain a Bluetooth-based navigation system over the long term. LuzDeploy simplifies the tasks required to install and maintain the localization infrastructure, making a crowdsourcing approach feasible for non-experts. We report on a field deployment in which 127 participants installed and maintained a blind navigation system in a 7-story building over several months, completing 455 tasks in total. We compare the accuracy of the system installed by participants to an installation completed by experts with specialized equipment. LuzDeploy aims to improve the sustainability of indoor navigation systems and thereby encourage widespread adoption outside of research settings.
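
The abstract frames maintenance as catching beacons whose batteries have died or that have gone missing and routing that work to non-expert volunteers. As a rough, hypothetical sketch only (the names Beacon, ScanReading, and find_maintenance_tasks are illustrative and not LuzDeploy's actual API, and the RSSI threshold is an assumed value), the Python below shows how a volunteer's walkthrough scan could be compared against the installation record to flag beacons that likely need attention:

    # Hypothetical sketch: flagging beacons that may need maintenance after a walkthrough scan.
    # All names and thresholds here are illustrative assumptions, not LuzDeploy's implementation.

    from dataclasses import dataclass

    @dataclass
    class Beacon:
        beacon_id: str      # identifier recorded at installation time
        location: str       # e.g. "7th floor, near the elevator bank"

    @dataclass
    class ScanReading:
        beacon_id: str
        rssi_dbm: float     # received signal strength observed during the walkthrough

    def find_maintenance_tasks(installed, readings, weak_rssi_dbm=-95.0):
        """Return (beacon, reason) pairs for beacons not heard, or heard only weakly, in a scan.

        `installed` is the list of Beacon records from the installation phase;
        `readings` are ScanReading samples collected while walking the floor.
        """
        # Keep only the strongest reading observed for each beacon.
        best_rssi = {}
        for r in readings:
            best_rssi[r.beacon_id] = max(best_rssi.get(r.beacon_id, float("-inf")), r.rssi_dbm)

        tasks = []
        for b in installed:
            rssi = best_rssi.get(b.beacon_id)
            if rssi is None:
                tasks.append((b, "missing: not detected during walkthrough"))
            elif rssi < weak_rssi_dbm:
                tasks.append((b, f"weak signal ({rssi:.0f} dBm): battery may be low"))
        return tasks

    if __name__ == "__main__":
        installed = [Beacon("b-001", "7F elevator"), Beacon("b-002", "7F stairwell")]
        readings = [ScanReading("b-001", -68.0)]   # b-002 is never heard
        for beacon, reason in find_maintenance_tasks(installed, readings):
            print(beacon.beacon_id, beacon.location, "->", reason)

In a deployment like the one reported, a check of this kind could plausibly run on a volunteer's phone during a routine walkthrough, with each flagged beacon turned into a battery-swap or replacement task for another participant.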
