A Smart Context-Aware Hazard Attention System to Help People with Peripheral Vision Loss

Peripheral vision loss results in an inability to detect objects in the peripheral visual field, which impairs a person's ability to evaluate and avoid potential hazards. A number of assistive navigation systems based on wearable and portable devices have been developed to help people with vision impairments. Most of these systems search for obstacles and provide safe navigation paths for visually impaired people without prioritising the degree of danger each hazard poses. This paper presents a new context-aware hybrid (indoor/outdoor) hazard classification assistive technology that helps people with peripheral vision loss navigate using computer-enabled smart glasses equipped with a wide-angle camera. The proposed system augments the user's remaining healthy vision with suitable, meaningful and smart notifications that draw attention to possible obstructions or hazards in the peripheral field of view. A deep learning object detector recognises static and moving objects in real time. Detected objects are then tracked over time with a Kalman filter multi-object tracker to build a motion model for each object, describing how that object moves relative to the user. Motion features are extracted while the object remains in the user's field of vision, and a neural network-based classifier uses these features to quantify the danger as one of five predefined hazard classes. Classification performance was tested on both publicly available and private datasets; the system shows promising results, with up to a 90% True Positive Rate (TPR) alongside a False Positive Rate (FPR) as low as 7%, a False Negative Rate (FNR) of 13%, and an average testing Mean Square Error (MSE) of 8.8%. The predicted hazard class is then translated into a smart notification to increase the user's cognitive perception through the healthy portion of the visual field. A participant study was conducted with a group of patients with different visual field defects to gather their feedback on the proposed system and the notification generation stage. A real-world outdoor evaluation with human subjects is planned as near-future work.
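The pipeline described above can be illustrated with a minimal sketch: a constant-velocity Kalman filter tracks each detected object's bounding-box centre, simple motion features are extracted from the track, and a small neural network maps those features to one of five hazard classes. The feature set, the class labels in HAZARD_CLASSES, and the use of sklearn's MLPClassifier are illustrative assumptions, not the authors' exact design; in the real system the detections would come from the deep learning detector rather than the placeholder values used here.

```python
# Sketch of detect -> track -> extract features -> classify hazard (assumptions noted above).
import numpy as np
import cv2
from sklearn.neural_network import MLPClassifier

HAZARD_CLASSES = ["none", "low", "moderate", "high", "urgent"]  # assumed labels

class Track:
    """Constant-velocity Kalman filter over a detection's bounding-box centre."""
    def __init__(self, cx, cy):
        self.kf = cv2.KalmanFilter(4, 2)  # state [cx, cy, vx, vy]; measures [cx, cy]
        self.kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                             [0, 1, 0, 1],
                                             [0, 0, 1, 0],
                                             [0, 0, 0, 1]], np.float32)
        self.kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
        self.kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
        self.kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
        self.kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)

    def step(self, cx, cy):
        """Predict one frame ahead, then correct with the latest detection centre."""
        self.kf.predict()
        self.kf.correct(np.array([[cx], [cy]], np.float32))

    def features(self, frame_centre):
        """Illustrative motion features: speed, heading, distance to view centre."""
        cx, cy, vx, vy = (float(v) for v in self.kf.statePost.ravel())
        speed = float(np.hypot(vx, vy))
        heading = float(np.arctan2(vy, vx))
        dist = float(np.hypot(cx - frame_centre[0], cy - frame_centre[1]))
        return [speed, heading, dist]

# Hazard classifier: a small fully-connected network (sklearn MLP as a stand-in).
# Random placeholder data here; the paper trains on labelled motion features.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
X_train = np.random.rand(200, 3)             # placeholder feature vectors
y_train = np.random.randint(0, 5, size=200)  # placeholder hazard labels
clf.fit(X_train, y_train)

# Per-frame loop: in the real system these centres come from the object detector.
track = Track(300.0, 220.0)
for cx, cy in [(305.0, 225.0), (312.0, 232.0), (320.0, 240.0)]:  # fake detections
    track.step(cx, cy)
hazard = HAZARD_CLASSES[int(clf.predict([track.features((320.0, 240.0))])[0])]
print("hazard level:", hazard)
```

In a deployed version, data association between per-frame detections and existing tracks (e.g. via the Hungarian algorithm, which the reference list suggests the authors use) would precede the Kalman update, and the classifier output would feed the notification-generation stage.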
