Portable Camera-Based Assistive Text and Product Label Reading from Hand-Held Objects for Blind Persons

We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging from hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method to define a region of interest (ROI) in the video by asking the user to shake the object. This method extracts the moving object region with a mixture-of-Gaussians-based background subtraction technique. In the extracted ROI, text localization and recognition are then conducted to acquire text information. To automatically localize the text regions within the object ROI, we propose a novel text localization algorithm that learns gradient features of stroke orientations and distributions of edge pixels in an AdaBoost model.
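A minimal sketch of the motion-based ROI step is given below. It is not the authors' implementation; it assumes OpenCV's MOG2 background subtractor as a stand-in for the mixture-of-Gaussians model and a short video of the user shaking the hand-held object.

```python
import cv2
import numpy as np

def extract_object_roi(video_path, history=200, var_threshold=16):
    """Accumulate foreground masks from MOG2 background subtraction and
    return the bounding box of the dominant moving region as the object ROI."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=history, varThreshold=var_threshold, detectShadows=False)
    accumulated = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = subtractor.apply(frame)          # per-frame foreground mask
        if accumulated is None:
            accumulated = np.zeros(fg_mask.shape, dtype=np.float32)
        accumulated += (fg_mask > 0).astype(np.float32)
    cap.release()

    if accumulated is None:
        return None

    # Keep pixels that were foreground in a substantial fraction of frames,
    # close small gaps, and take the largest connected region as the object.
    motion = (accumulated > 0.3 * accumulated.max()).astype(np.uint8) * 255
    motion = cv2.morphologyEx(motion, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x, y, w, h)   # ROI handed on to text localization and recognition
```

For the text localization step, the following is a hypothetical sketch in the spirit of the described features: per-block gradient (stroke) orientation histograms and edge-pixel density, fed to an off-the-shelf AdaBoost classifier. The helper names and parameters are illustrative assumptions; labeled text/non-text training patches are assumed to be available.

```python
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def block_features(gray_block, n_bins=8):
    """Gradient-orientation histogram plus edge-pixel density for one image block."""
    gx = cv2.Sobel(gray_block, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_block, cv2.CV_32F, 0, 1, ksize=3)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 360), weights=mag)
    hist /= (hist.sum() + 1e-6)                    # normalized stroke orientations
    edges = cv2.Canny(gray_block, 100, 200)
    edge_density = np.count_nonzero(edges) / edges.size
    return np.append(hist, edge_density)

# Assumed workflow: train on labeled patches (1 = text, 0 = background),
# then scan the object ROI with a sliding window and merge positive blocks
# into candidate text regions for the recognition stage.
# X = np.array([block_features(p) for p in patches]); y = np.array(labels)
# clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
```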