Smartphone-Based Outdoor Navigation and Obstacle Avoidance System for the Visually Impaired

Interlaced roads and unexpected obstacles restrict the mobility of the blind. Existing outdoor assistive systems for the blind are bulky or costly, and some cannot even report the type or distance of obstacles. An assistive system should provide navigation, obstacle detection, and ranging functions at an affordable price and in a portable form. This paper presents a smartphone-based outdoor navigation system for the visually impaired that also helps them avoid multiple types of dangerous obstacles. Geographic information obtained from the GPS receiving module is processed by a professional navigation API to provide directional guidance. To help the visually impaired avoid obstacles, SSD-MobileNetV2 is retrained on a self-collected dataset of 4500 images to better detect typical obstacles on the road, i.e., cars, motorcycles, electric bicycles, bicycles, and pedestrians. A lightweight monocular ranging method is then employed to estimate each obstacle's distance. Based on category and distance, the obstacle's risk level is evaluated and conveyed to the blind user in a timely manner via distinct tones. Field tests show that the retrained SSD-MobileNetV2 model detects obstacles with considerable precision and that the vision-based ranging method effectively estimates distance.
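The pipeline described above (detect an obstacle class, estimate its distance with a lightweight monocular method, then map category and distance to a risk level) can be sketched as follows. The paper does not publish its exact ranging formula or risk thresholds; the sketch below assumes the common pinhole-camera similar-triangles model (distance ≈ focal length × known real-world height ÷ bounding-box pixel height), and the per-class heights, focal length, and thresholds are illustrative assumptions, not the authors' values.

```python
# Hedged sketch of category-aware monocular ranging and risk grading.
# All numeric constants here are assumptions for illustration only.

# Assumed typical real-world heights (metres) for the five obstacle
# classes the retrained SSD-MobileNetV2 model detects.
CLASS_HEIGHTS_M = {
    "car": 1.5,
    "motorcycle": 1.1,
    "electric_bicycle": 1.1,
    "bicycle": 1.0,
    "pedestrian": 1.7,
}

def estimate_distance_m(class_name: str, bbox_height_px: float,
                        focal_length_px: float = 1000.0) -> float:
    """Similar-triangles monocular ranging:
    distance = focal_length_px * real_height / pixel_height."""
    real_height = CLASS_HEIGHTS_M[class_name]
    return focal_length_px * real_height / bbox_height_px

def risk_level(class_name: str, distance_m: float) -> str:
    """Map obstacle category + distance to a coarse risk level.
    Motorised obstacles get a larger alert radius (assumed thresholds)."""
    near = 3.0 if class_name == "pedestrian" else 6.0
    if distance_m < near:
        return "high"      # e.g. urgent warning tone
    if distance_m < 2 * near:
        return "medium"    # e.g. cautionary tone
    return "low"           # e.g. no tone or soft tone
```

On the phone, each detection's risk level would then select one of the distinct warning tones played to the user, so that a nearby car triggers a more urgent cue than a distant pedestrian.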
