Navigation Assistance for the Visually Impaired Using RGB-D Sensor With Range Expansion

Navigation assistance for visually impaired (NAVI) refers to systems that are able to assist or guide people with vision loss, ranging from partially sighted to totally blind, by means of sound commands. In this paper, a new system for NAVI is presented based on visual and range information. Instead of using several sensors, we choose one device, a consumer RGB-D camera, and take advantage of both range and visual information. In particular, the main contribution is the combination of depth information with image intensities, resulting in the robust expansion of the range-based floor segmentation. On one hand, depth information, which is reliable but limited to a short range, is enhanced with the long-range visual information. On the other hand, the difficult and prone-to-error image processing is eased and improved with depth information. The proposed system detects and classifies the main structural elements of the scene providing the user with obstacle-free paths in order to navigate safely across unknown scenarios. The proposed system has been tested on a wide variety of scenarios and data sets, giving successful results and showing that the system is robust and works in challenging indoor environments.
