RoboGuideDog: Guiding Blind Users Through Physical Environments with Laser Range Scanners

Abstract In this paper we discuss the initial concepts behind the development of a fully automatic guide-dog system for blind users. The physical scene is scanned with a laser range scanner, and the resulting three-dimensional point-cloud measurements are analyzed and transformed into a description of the environment that is communicated to the user via synthetic speech and/or haptic feedback, allowing the user to navigate the physical space.
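To make the processing pipeline above concrete, the sketch below shows one plausible first step: fitting a ground plane to the scanner's point cloud with a simple RANSAC loop, so that the remaining off-plane points can be treated as obstacle candidates. This is an illustrative sketch only; the function name, parameters, and thresholds are assumptions and are not taken from the paper.

```python
import numpy as np

def fit_ground_plane(points, iterations=200, threshold=0.05, rng=None):
    """Fit a ground plane to an (N, 3) laser-scanner point cloud via RANSAC.

    Returns (normal, d, inlier_mask) for the plane normal . p + d = 0 that
    has the most points within `threshold` metres.  All parameter values
    here are illustrative assumptions, not figures from the paper.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iterations):
        # Sample three distinct points and form a candidate plane.
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # nearly collinear sample, skip
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Count points lying within the distance threshold of the plane.
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers
```

Points flagged as off-plane would then be clustered into discrete obstacles whose positions and sizes are rendered to the user as speech or haptic cues.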
