Inpainting quality assessment

We propose a means of objectively comparing the results of digital image inpainting algorithms by analyzing changes in predicted human attention before and after inpainting is applied. Artifacting is generalized into two categories, in-region and out-region, depending on whether attention changes occur primarily within the edited region or in nearby (contrasting) regions. Human qualitative scores are shown to correlate strongly with numerical in-region and out-region artifacting scores, and we demonstrate the effectiveness of training supervised classifiers of increasing complexity on these scores. Results are shown on two novel human-scored datasets.
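The core comparison described above can be sketched as follows. This is a minimal illustration, not the paper's actual metric: it assumes saliency maps predicted before and after inpainting (e.g. from an Itti-Koch-style attention model) and a binary mask of the edited region, and simply averages the absolute per-pixel saliency change inside and outside that region. The function name and normalization are hypothetical.

```python
import numpy as np

def attention_change_scores(sal_before, sal_after, mask):
    """Split per-pixel predicted-attention change into in-region and
    out-region scores.

    sal_before, sal_after: 2-D saliency maps of the same shape,
        normalized to [0, 1] (assumed to come from an attention model).
    mask: boolean array of the same shape, True inside the inpainted region.
    Returns (in_region, out_region) mean absolute saliency change.
    """
    sal_before = np.asarray(sal_before, dtype=float)
    sal_after = np.asarray(sal_after, dtype=float)
    mask = np.asarray(mask, dtype=bool)

    # Absolute change in predicted attention at each pixel.
    diff = np.abs(sal_after - sal_before)

    # Average the change separately inside and outside the edited region.
    in_region = float(diff[mask].mean()) if mask.any() else 0.0
    out_region = float(diff[~mask].mean()) if (~mask).any() else 0.0
    return in_region, out_region
```

A pair of such scores could then serve as features for the supervised classifiers mentioned in the abstract, with larger out-region values suggesting that the edit has drawn attention away from its intended location.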
