What the Eye Did Not See - A Fusion Approach to Image Coding

The concentration of cones and ganglion cells is much higher in the fovea than in the rest of the retina. This non-uniform sampling yields a retinal image that is sharp at the fixation point, where a person is looking, and increasingly blurred away from it. This difference in sampling rates across spatial locations raises the question of whether this biological characteristic can be exploited to achieve better image compression, by compressing an image less at the fixation point and more away from it. It is known, however, that the visual system employs more than one fixation to view a single scene, which presents the problem of combining images that pertain to the same scene but exhibit different spatial contrasts. This article presents an algorithm that combines such a series of images by using image fusion in the gradient domain. The advantage of the algorithm is that, unlike algorithms that compress the image in the spatial domain, it produces no artifacts. The algorithm consists of two steps: in the first, we modify the gradients of an image based on a limited number of fixations; in the second, we integrate the modified gradient field. Results based on measured and predicted fixations verify our approach.
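The two steps of the algorithm can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the Gaussian fixation weighting, the choice of `sigma`, and the FFT-based Poisson integration (with periodic boundary handling) are all assumptions made for the sketch. It attenuates image gradients according to their distance from the nearest fixation, then recovers an image whose gradient field best matches the modified one.

```python
import numpy as np

def fixation_weight(shape, fixation, sigma):
    """Weight map that is 1 at the fixation point and decays with
    distance (Gaussian falloff is an assumption of this sketch)."""
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    d2 = (ys - fixation[0]) ** 2 + (xs - fixation[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fuse_gradients(image, fixations, sigma=20.0):
    """Step 1: scale the gradients by the per-pixel maximum over all
    fixation weight maps, so detail survives near any fixation and is
    suppressed away from all of them."""
    gy, gx = np.gradient(image.astype(float))
    w = np.max([fixation_weight(image.shape, f, sigma) for f in fixations],
               axis=0)
    return gy * w, gx * w

def integrate_poisson(gy, gx):
    """Step 2: integrate the modified gradient field by solving the
    Poisson equation in the Fourier domain (assumes periodic
    boundaries; the mean intensity is unconstrained and set to zero)."""
    h, w = gy.shape
    # Divergence of the gradient field is the right-hand side.
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    fy = np.fft.fftfreq(h).reshape(-1, 1)
    fx = np.fft.fftfreq(w).reshape(1, -1)
    denom = (2 * np.cos(2 * np.pi * fy) - 2) + (2 * np.cos(2 * np.pi * fx) - 2)
    denom[0, 0] = 1.0          # avoid division by zero at the DC term
    F = np.fft.fft2(div) / denom
    F[0, 0] = 0.0              # pin the free mean-intensity term to zero
    return np.real(np.fft.ifft2(F))
```

In use, an image and a list of fixation coordinates go through `fuse_gradients` and then `integrate_poisson`; far from every fixation the reconstructed image is nearly flat (and hence cheap to code), while near a fixation it retains the original detail.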
