S-CCR: Super-Complete Comparative Representation for Low-Light Image Quality Inference In-the-wild

With the rapid development of weak-illumination imaging technology, low-light images have brought new challenges to quality of experience and quality of service. However, developing a robust quality indicator for authentic low-light distortions in-the-wild remains a major challenge for practical quality control systems. In this paper, we develop a new super-complete comparative representation (S-CCR) for region-level quality inference of low-light images. Specifically, guided by human visual characteristics, we mine color, luminance, and detail quality evidence to steer the feature embedding of the comparative representation. Moreover, we decompose the inputs into a super-complete feature group so that the quality of each region is fully represented while preserving distinctiveness, distinguishability, and consistency. Finally, we establish a comparative domain alignment method so that the comparative representation of an unseen image can be aligned with the quality features of already-seen ones. Extensive experiments on a benchmark dataset of authentic distortions validate the superiority of S-CCR over 11 competing methods.
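To make the three quality cues named above concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of how color, luminance, and detail evidence maps could be computed for an image region before feeding a comparative representation; the specific operators (CIELAB channels, chroma magnitude, Laplacian response) and the file name are assumptions for illustration only.

```python
# Illustrative sketch: per-region color / luminance / detail quality evidence.
# The choice of operators is an assumption, not the S-CCR method itself.
import cv2
import numpy as np


def quality_evidence(region_bgr: np.ndarray):
    """Return luminance, color, and detail evidence maps for an 8-bit BGR patch."""
    lab = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    luminance = lab[..., 0] / 255.0                                  # L channel as luminance evidence
    color = np.linalg.norm(lab[..., 1:] - 128.0, axis=-1) / 128.0    # chroma magnitude as color evidence
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    detail = np.abs(cv2.Laplacian(gray, cv2.CV_32F))                 # high-frequency response as detail evidence
    detail = detail / (detail.max() + 1e-6)
    return luminance, color, detail


if __name__ == "__main__":
    # Hypothetical input file; compare evidence statistics of two regions.
    img = cv2.imread("low_light_example.png")
    h, w = img.shape[:2]
    regions = {"left": img[:, : w // 2], "right": img[:, w // 2:]}
    for name, region in regions.items():
        lum, col, det = quality_evidence(region)
        print(name, float(lum.mean()), float(col.mean()), float(det.mean()))
```

Under this sketch, the per-region evidence statistics would serve only as guidance signals; the learned comparative embedding and the domain alignment step described in the abstract are not reproduced here.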
