A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor

For a number of years, researchers have been working to develop aids that make visually impaired people more independent and more aware of their surroundings. Computer-based navigation aids are one example, motivated by the continuing miniaturization of electronics and improvements in processing power and sensing capabilities. This paper presents a complete navigation system built from low-cost, physically unobtrusive sensors: a camera and the Kinect's infrared depth sensor. Obstacles are detected in the camera images using corner detection, while the depth sensor supplies the distance to each detected point; combining the two cues makes the system both efficient and robust. The system not only identifies obstacles but also suggests a safe path (if one is available) to the left or right, instructing the user to stop, move left, or move right. It has been tested in real time by both blindfolded and blind participants at several indoor and outdoor locations and was found to operate adequately.
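The corner-plus-depth fusion described above can be summarized in a short sketch. The Python snippet below is only an illustration of the idea, not the authors' implementation: it assumes an already aligned grayscale frame and Kinect depth map are provided by some capture library, and the detector choice (Shi-Tomasi via OpenCV), the 1.2 m danger threshold, and the simple left/right image split are all illustrative assumptions.

```python
# Minimal sketch of corner-plus-depth obstacle detection (illustrative only).
# Assumes gray_frame and depth_mm are the same resolution and already aligned.
import numpy as np
import cv2

DANGER_DISTANCE_MM = 1200  # assumed "obstacle is too close" threshold


def navigation_hint(gray_frame: np.ndarray, depth_mm: np.ndarray) -> str:
    """Return 'stop', 'move left', 'move right', or 'go straight'."""
    # 1. Detect corners in the camera image (Shi-Tomasi here as a stand-in
    #    for whichever corner detector the system uses).
    corners = cv2.goodFeaturesToTrack(gray_frame, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return "go straight"

    h, w = depth_mm.shape
    left_hits = right_hits = 0
    for x, y in corners.reshape(-1, 2).astype(int):
        d = depth_mm[y, x]
        # 2. Fuse with depth: a corner counts as an obstacle only if the
        #    corresponding depth reading is valid and closer than the threshold.
        if 0 < d < DANGER_DISTANCE_MM:
            if x < w // 2:
                left_hits += 1
            else:
                right_hits += 1

    # 3. Suggest the freer side, or stop if both halves are blocked.
    if left_hits == 0 and right_hits == 0:
        return "go straight"
    if left_hits > 0 and right_hits > 0:
        return "stop"
    return "move right" if left_hits > right_hits else "move left"
```

In a full system the returned hint would drive a spoken or tactile instruction to the user; here it is simply returned as a string so the fusion logic stays self-contained.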
