Robust and Efficient Saliency Modeling from Image Co-occurrence Histograms.
This paper presents a visual saliency modeling technique that is efficient and tolerant to image scale variation. Unlike existing approaches that rely on a large number of filters or complicated learning processes, the proposed technique computes saliency from image histograms. Several two-dimensional image co-occurrence histograms are used, which encode not only "how many" (occurrence) but also "where and how" (co-occurrence) image pixels compose a visual image, hence capturing the "unusualness" of an object or image region that is often perceived through either global "uncommonness" (i.e., low occurrence frequency) or local "discontinuity" with respect to its surroundings (i.e., low co-occurrence frequency). The proposed technique has a number of advantageous characteristics. It is fast and very easy to implement. At the same time, it involves minimal parameter tuning, requires no training, and is robust to image scale variation. Experiments on the AIM dataset show that the proposed technique achieves a shuffled AUC (sAUC) of 0.7221, which is higher than the state-of-the-art sAUC of 0.7187.
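To make the co-occurrence idea concrete, the following is a minimal sketch (not the authors' reference implementation) of saliency from a single grayscale co-occurrence histogram. The specific choices here are assumptions for illustration: 64 intensity bins, horizontal/vertical 4-neighbour pixel pairs, saliency taken as the negative log of the joint pair frequency (so rare pairs score high), and min-max normalization of the resulting map.

```python
import numpy as np

def cooccurrence_saliency(gray, bins=64):
    """Sketch: per-pixel saliency from a 2-D intensity co-occurrence histogram.

    gray : 2-D float array with values in [0, 1].
    """
    h, w = gray.shape
    q = np.minimum((gray * bins).astype(np.int64), bins - 1)  # quantize intensities

    # Accumulate the co-occurrence histogram over horizontal and vertical neighbours.
    pairs = [(q[:, :-1].ravel(), q[:, 1:].ravel()),
             (q[:-1, :].ravel(), q[1:, :].ravel())]
    hist = np.zeros((bins, bins), dtype=np.float64)
    for a, b in pairs:
        np.add.at(hist, (a, b), 1.0)
        np.add.at(hist, (b, a), 1.0)  # keep the histogram symmetric
    hist /= hist.sum()

    # Rare pixel pairs (low co-occurrence frequency) are treated as salient.
    cost = -np.log(hist + 1e-12)

    sal = np.zeros((h, w))
    cnt = np.zeros((h, w))
    # Spread each pair's cost back onto both pixels of the pair: horizontal pairs...
    c = cost[q[:, :-1], q[:, 1:]]
    sal[:, :-1] += c; cnt[:, :-1] += 1
    sal[:, 1:]  += c; cnt[:, 1:]  += 1
    # ...and vertical pairs.
    c = cost[q[:-1, :], q[1:, :]]
    sal[:-1, :] += c; cnt[:-1, :] += 1
    sal[1:, :]  += c; cnt[1:, :]  += 1
    sal /= np.maximum(cnt, 1)

    # Min-max normalize to [0, 1].
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# Example usage on a synthetic image with one rare bright patch.
rng = np.random.default_rng(0)
img = 0.3 * rng.random((240, 320))
img[100:140, 150:200] = 1.0          # uncommon region -> low co-occurrence frequency
smap = cooccurrence_saliency(img)    # saliency concentrates around the patch
```

Since the whole computation reduces to quantization, histogram accumulation, and a table lookup per pixel pair, it requires no training and only one real parameter (the number of bins), which reflects the efficiency and minimal-tuning claims of the abstract.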