Embedded soundscape rendering for the visually impaired

The objective of this work is to improve the quality of life of visually impaired people by enhancing their ability to navigate independently. Our system provides a 3D audio representation of the environment by synthesizing virtual sound sources that correspond to obstacles or that guide the user along a safe path. The key characteristics of our system are low computational complexity and a simple user-customization method. The low complexity makes the system suitable for resource-constrained embedded platforms, such as portable devices, while still guaranteeing real-time reproduction of the auditory stimuli. In the paper, we discuss the underlying perception model, its implementation, and experimental results that show the effectiveness of the approach.
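The paper itself does not reproduce the rendering algorithm here, but the kind of low-complexity binaural synthesis it alludes to can be illustrated with a simple interaural-cue model. The sketch below is an assumption-laden example, not the authors' implementation: it places a mono source at a given azimuth using only an interaural time difference (Woodworth's spherical-head formula) and a crude interaural level difference, which is the sort of structural approximation (cf. Brown and Duda's structural model) that avoids full HRTF convolution on an embedded device. The head radius, the 6 dB shadow figure, and the function names are illustrative choices.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, air at room temperature
HEAD_RADIUS = 0.0875     # m, average head radius used by the Woodworth model

def itd_seconds(azimuth_rad):
    """Interaural time difference for a spherical head (Woodworth's formula)."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + np.sin(azimuth_rad))

def spatialize(mono, azimuth_deg, fs=44100):
    """Render a mono signal as a stereo pair using ITD plus a simple ILD.

    Positive azimuth places the source to the listener's right.
    Returns an array of shape (2, n) holding the [left, right] channels.
    """
    az = np.radians(azimuth_deg)
    delay = int(round(abs(itd_seconds(az)) * fs))        # interaural delay in samples
    # Crude head-shadow ILD: attenuate the far ear by up to ~6 dB at 90 degrees.
    far_gain = 10.0 ** (-6.0 * abs(np.sin(az)) / 20.0)
    near = np.concatenate([mono, np.zeros(delay)])       # near ear: undelayed
    far = far_gain * np.concatenate([np.zeros(delay), mono])  # far ear: delayed, attenuated
    if azimuth_deg >= 0:                                 # source on the right
        return np.stack([far, near])
    return np.stack([near, far])
```

A real-time version would process the signal block by block and crossfade the delay when the source moves, but the per-sample cost stays at a delay line and a gain, which is what makes this class of model attractive on a portable platform.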
