Practical applications that require some of the more advanced features of current visual models

While the use of visual models for assessing all aspects of the imaging chain is steadily increasing, one hindrance is the complexity of these models. This complexity has an impact in two ways: not only does a more complex visual model take longer to run, making it difficult to place inside optimization loops, but it also takes longer to code, test, and calibrate. As a result, a number of shortcut models have been proposed and used. Some of the shortcuts involve more efficient frequency transforms, such as a Cartesian separable wavelet, while others omit the steps required to simulate certain visual mechanisms, such as masking. A key example of the latter is spatial CIELAB, which models only the opponent color contrast sensitivity functions (CSFs) and not the spatial frequency channels. Watson's recent analysis of the Modelfest data showed that while a multi-channel model gave the best performance, versions that dispense with the complex frequency bank and use only frequency attenuation performed nearly as well. Of course, the Modelfest data addressed detection of a signal on a uniform field, so no masking properties were probed. At the other end of the complexity range is the model by D'Zmura, which includes not only radial and orientation channels but also the interactions between these channels in both luminance and color. This talk will dissect several types of practical distortions that require more advanced visual models. One of these is the need for orientation channels to predict the edge jaggies caused by aliasing. Other visual mechanisms in search of an exigent application that we will explore include cross luminance-chrominance masking and facilitation, local contrast, and cross-channel masking.
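To make the shortcut end of this spectrum concrete, the sketch below implements a spatial-CIELAB-style metric in the spirit described above: an opponent color transform, per-channel lowpass filtering that stands in for the opponent CSFs, then a per-pixel difference. This is a minimal illustration under stated assumptions, not the published S-CIELAB implementation: the opponent matrix, the Gaussian spreads, and the function names are placeholders, and the final difference is taken in opponent space rather than converted back to CIELAB delta-E. A model of the more advanced kind the talk argues for would insert frequency/orientation channels and a masking (contrast gain control) stage at the point marked in the code.

import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative RGB -> opponent (luminance, red-green, blue-yellow) transform.
# These coefficients are placeholders, not the calibrated S-CIELAB matrix.
_OPPONENT = np.array([
    [0.299,  0.587,  0.114],   # luminance-like channel
    [0.500, -0.500,  0.000],   # red-green-like channel
    [0.250,  0.250, -0.500],   # blue-yellow-like channel
])

# Placeholder spatial spreads (pixels): the chromatic channels are blurred more,
# mimicking their lower-bandwidth contrast sensitivity functions.
_SIGMAS = (1.0, 3.0, 5.0)


def opponent(img_rgb):
    """Project an (H, W, 3) RGB image onto the opponent color channels."""
    return img_rgb @ _OPPONENT.T


def csf_filter(opp):
    """Per-channel lowpass blur standing in for the opponent-color CSFs."""
    out = np.empty_like(opp)
    for c, sigma in enumerate(_SIGMAS):
        out[..., c] = gaussian_filter(opp[..., c], sigma)
    return out


def shortcut_difference(ref_rgb, test_rgb):
    """Per-pixel Euclidean difference between CSF-filtered opponent images.

    A more advanced model would insert radial/orientation channels and a
    divisive-normalization (masking) stage here, before the differencing.
    """
    d = csf_filter(opponent(ref_rgb)) - csf_filter(opponent(test_rgb))
    return np.sqrt((d ** 2).sum(axis=-1))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64, 3))
    test = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
    print("mean filtered difference:", shortcut_difference(ref, test).mean())

Because this pipeline is only a fixed linear transform and three separable blurs, it is cheap enough to sit inside an optimization loop, which is exactly the appeal of the shortcut models discussed above; what it cannot do is predict masking, facilitation, or orientation-dependent artifacts such as edge jaggies.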

[1] Patrick C. Teo, et al. Perceptual image distortion, 1994, Electronic Imaging.

[2] L. A. Olzak, et al. Discrimination of Complex Patterns: Orientation Information is Integrated across Spatial Scale; Spatial-Frequency and Contrast Information are Not, 1997, Perception.

[3] Robert Eriksson, et al. Modeling the perception of digital images: a performance study, 1998, Electronic Imaging.

[4] T. Caelli, et al. Visual sensitivity to two-dimensional spatial phase, 1982, Journal of the Optical Society of America.

[5] Scott J. Daly, et al. Visible differences predictor: an algorithm for the assessment of image fidelity, 1992, Electronic Imaging.

[6] Miguel P. Eckstein, et al. Image discrimination models predict signal detection in natural medical image backgrounds, 1997, Electronic Imaging.

[7] C. A. Dvorak, et al. Detection and discrimination of blur in edges and lines, 1981.

[8] N. Graham, et al. Investigating simple and complex mechanisms in texture segregation using the speed-accuracy tradeoff method, 1995, Vision Research.

[9] William T. Freeman, et al. Presented at: 2nd Annual IEEE International Conference on Image Processing, 1995.

[10] M. R. M. Nijenhuis, et al. Perceptual error measure for sampled and interpolated images, 1997.

[11] Michel Barlaud, et al. Image coding using wavelet transform, 1992, IEEE Transactions on Image Processing.

[12] Anthony M. Norcia, et al. Modelfest: year one results and plans for future years, 2000, Electronic Imaging.

[13] Wei Wu, et al. Contrast gain control for color image quality, 1998, Electronic Imaging.

[14] Andrew P. Bradley, et al. A wavelet visible difference predictor, 1999, IEEE Transactions on Image Processing.

[15] Robert F. Hess, et al. Detection of contrast-defined shape, 2001.

[16] Wilson S. Geisler, et al. Image quality assessment based on a degradation model, 2000, IEEE Transactions on Image Processing.

[17] Karol Myszkowski, et al. Perception-Based Fast Rendering and Antialiasing of Walkthrough Sequences, 2000, IEEE Transactions on Visualization and Computer Graphics.

[18] Gary W. Meyer, et al. A perceptually based adaptive sampling algorithm, 1998, SIGGRAPH.

[19] A. Bradley, et al. Contrast dependence and mechanisms of masking interactions among chromatic and luminance gratings, 1988, Journal of the Optical Society of America A, Optics and Image Science.

[20] Perceived structure of plaids implies variable combination of oriented filters in edge-finding, 1996.

[21] R. Watt, et al. The recognition and representation of edge blur: Evidence for spatial primitives in human vision, 1983, Vision Research.

[22] Jeffrey Lubin, et al. A visual discrimination model for imaging system design and evaluation, 1995.

[23] A. B. Watson, et al. Visual detection of spatial contrast patterns: evaluation of five simple models, 2000, Optics Express.

[24] John G. Daugman, et al. Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression, 1988, IEEE Transactions on Acoustics, Speech, and Signal Processing.

[25] Donald P. Greenberg, et al. A multiscale model of adaptation and spatial vision for realistic image display, 1998, SIGGRAPH.

[26] Brian A. Wandell, et al. A spatial extension of CIELAB for digital color-image reproduction, 1997.

[27] Elaine W. Jin, et al. The Development of A Color Visual Difference Model (CVDM), 1998, PICS.

[28] D. Heeger. Normalization of cell responses in cat striate cortex, 1992, Visual Neuroscience.

[29] Andrew B. Watson, et al. The cortex transform: rapid computation of simulated neural images, 1987.

[30] David J. Field, et al. Contour integration by the human visual system: Evidence for a local “association field”, 1993, Vision Research.