A Novel Foveated-JND Profile Based on an Adaptive Foveated Weighting Model

Visual attention (VA) and foveated masking (FM) are two important human visual system (HVS) characteristics that have not been fully exploited in conventional foveated just-noticeable-difference (FJND) models. This paper presents a fixation-point estimation method used to build an adaptive foveated weighting model, from which an improved FJND profile that accounts for both VA and FM effects is developed. The proposed profile overcomes two limitations of conventional FJND profiles. First, conventional profiles cannot accurately locate fixation points; to address this, a fixation prediction algorithm based on the distribution of retinal cones is proposed to identify fixation points and regions. Second, conventional profiles do not incorporate the VA effect. By fully exploiting the VA and FM effects, this work proposes an adaptive foveated weighting model formulated as a function of fixation intensity and retinal eccentricity, where the fixation intensity is estimated from a saliency map modelled by a Gaussian mixture model. With the proposed weighting model, a new FJND profile is developed. Experimental results show that, at the same perceptual image quality, the proposed profile tolerates more distortion than other JND profiles.
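
To make the described pipeline concrete, the sketch below (in Python) illustrates one plausible realisation of the saliency-driven weighting: fixation points are obtained by fitting a Gaussian mixture model to pixel locations sampled from a saliency map, each fixation receives an intensity derived from the saliency at its centre, and the adaptive foveated weight of every pixel decays with retinal eccentricity from the nearest (strongest) fixation. The function names, the half-resolution constant `e2`, the viewing distance, and the falloff formula are illustrative assumptions, not the paper's exact model.

```python
# Hypothetical sketch (not the authors' exact formulation): estimate fixation
# points with a Gaussian mixture model fitted to a saliency map, derive a
# fixation intensity per point, and combine intensity with retinal
# eccentricity into an adaptive foveated weighting map.
import numpy as np
from sklearn.mixture import GaussianMixture


def estimate_fixations(saliency, n_fixations=3, n_samples=5000, seed=0):
    """Fit a GMM to pixel locations sampled in proportion to saliency.

    Returns fixation points (component means, in pixel coordinates) and a
    fixation intensity per point (here: saliency at the component mean,
    scaled by the mixture weight -- an assumed proxy, not the paper's).
    """
    saliency = np.asarray(saliency, dtype=float)
    h, w = saliency.shape
    p = saliency.ravel() / saliency.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(h * w, size=n_samples, p=p)
    coords = np.column_stack(np.unravel_index(idx, (h, w)))  # (row, col)

    gmm = GaussianMixture(n_components=n_fixations, covariance_type="full",
                          random_state=seed).fit(coords)
    rows = np.clip(gmm.means_[:, 0].round().astype(int), 0, h - 1)
    cols = np.clip(gmm.means_[:, 1].round().astype(int), 0, w - 1)
    intensity = saliency[rows, cols] * gmm.weights_
    intensity = intensity / intensity.max()          # normalise to [0, 1]
    return gmm.means_, intensity


def foveated_weight_map(shape, fixations, intensity,
                        viewing_distance=3.0, e2=2.3):
    """Adaptive foveated weights: high near strong fixations, decaying with
    retinal eccentricity.  The half-resolution eccentricity `e2` (degrees)
    and the viewing distance (in image heights) are assumed constants.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    weight = np.zeros(shape)
    for (fy, fx), inten in zip(fixations, intensity):
        d_pix = np.hypot(ys - fy, xs - fx)                # distance in pixels
        ecc = np.degrees(np.arctan(d_pix / (viewing_distance * h)))  # deg
        weight = np.maximum(weight, inten * e2 / (e2 + ecc))  # falloff
    return weight  # 1.0 at the strongest fixation, approaching 0 in periphery
```

Sampling pixel coordinates in proportion to saliency is a simple way to fit the mixture with scikit-learn, which does not accept per-sample weights directly; a JND profile would then scale its base thresholds by the inverse of this weight map so that peripheral regions tolerate larger distortion.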
