Automatic image conversion to tactile graphic

Individuals who are blind or visually impaired currently have few resources for interpreting the information contained in images. The aim of this project is to provide an accessible system that automatically converts visual images into tactile graphics for those who need them. The fundamental steps are to segment the image and then simplify it; this paper focuses on several methods for segmenting an image.
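As a concrete illustration of the segmentation step, the sketch below shows one common approach, k-means clustering of pixel colors, which groups an image into a small number of flat regions that could later be simplified and embossed. This is only a minimal example assuming OpenCV and NumPy; the function name `segment_kmeans`, the choice of k-means, and the file names are illustrative assumptions, not the specific methods evaluated in the paper.

```python
import cv2
import numpy as np

def segment_kmeans(image_bgr, k=4):
    """Cluster pixel colors with k-means (illustrative sketch).

    Returns a per-pixel label map and a flattened image in which
    each pixel is replaced by its cluster's mean color.
    """
    # Treat every pixel as a 3-D color sample.
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(
        pixels, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS
    )
    labels = labels.reshape(image_bgr.shape[:2])
    simplified = centers[labels].astype(np.uint8)
    return labels, simplified

if __name__ == "__main__":
    img = cv2.imread("diagram.png")            # hypothetical input image
    labels, simplified = segment_kmeans(img, k=4)
    cv2.imwrite("segmented.png", simplified)   # flat regions, ready for further simplification
```

In a tactile-graphics pipeline, the resulting region boundaries (rather than the colors themselves) would be the useful output, since raised edges and textures are what a reader ultimately feels.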
