Variable Rate Image Compression Method with Dead-zone Quantizer

Deep-learning-based image compression methods have achieved superior performance compared with conventional transform-based codecs. With end-to-end rate-distortion optimization (RDO), each compression model is optimized for a fixed Lagrange multiplier λ and therefore covers only a single rate point. In conventional codecs, by contrast, the signal is decorrelated with an orthonormal transform and then quantized with a uniform quantizer whose step size controls the rate. We propose a variable-rate image compression method based on a dead-zone quantizer. First, the autoencoder network is trained with the RaDOGAGA [15] framework, which makes the latent space isometric to the metric space of the distortion measure, such as SSIM or MSE. Then, conventional dead-zone quantization with an arbitrary step size is applied to the latents of this single trained network, providing flexible rate control. Experimental results show that, with the dead-zone quantizer, our method performs comparably to independently optimized models over a wide range of bitrates.
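
Since the proposed method pairs a single trained autoencoder with a classical dead-zone quantizer whose step size is swept to reach different rate points, the following is a minimal sketch of such a quantizer. The NumPy formulation, the function names, and the rounding offset of 1/3 are illustrative assumptions, not the implementation used here; the method itself only requires a dead-zone quantizer with an arbitrary step size applied to the latents.

```python
import numpy as np

def deadzone_quantize(latents, step, offset=1.0 / 3.0):
    """Dead-zone uniform quantization of latent values.

    A rounding offset below 0.5 widens the zero bin relative to the
    other bins, which is what makes this a "dead-zone" quantizer.
    """
    return np.sign(latents) * np.floor(np.abs(latents) / step + offset)

def deadzone_dequantize(indices, step):
    """Map quantization indices back to reconstructed latent values."""
    return indices * step

# Toy latents; in practice these would come from the trained encoder.
y = np.array([-2.7, -0.4, 0.1, 0.9, 3.2])

# Sweeping the step size over one trained model yields different rate points:
# a larger step gives coarser latents and hence a lower bitrate.
for step in (0.5, 1.0, 2.0):
    q = deadzone_quantize(y, step)
    print(step, q, deadzone_dequantize(q, step))
```

Increasing the step size widens every bin (the zero bin in particular), lowering the bitrate at the cost of distortion; because the RaDOGAGA-trained latent space is isometric to the distortion metric, sweeping the step size over one trained model traces out a rate-distortion curve.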

[1] Lucas Theis, et al. Lossy Image Compression with Compressive Autoencoders, 2017, ICLR.

[2] Georg Martius, et al. Variational Autoencoders Pursue PCA Directions (by Accident), 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[3] David Minnen, et al. Variational image compression with a scale hyperprior, 2018, ICLR.

[4] Luc Van Gool, et al. Conditional Probability Models for Deep Image Compression, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.

[5] Jungwon Lee, et al. Variable Rate Deep Image Compression With a Conditional Autoencoder, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).

[6] Jooyoung Lee, et al. Context-adaptive Entropy Model for End-to-end Optimized Image Compression, 2018, ICLR.

[7] Jing Zhou, et al. Multi-scale and Context-adaptive Entropy Model for Image Compression, 2019, CVPR Workshops.

[8] Michael W. Marcellin, et al. JPEG2000 - image compression fundamentals, standards and practice, 2013, The Kluwer international series in engineering and computer science.

[9] David Minnen, et al. Joint Autoregressive and Hierarchical Priors for Learned Image Compression, 2018, NeurIPS.

[10] Joost van de Weijer, et al. Variable Rate Deep Image Compression With Modulated Autoencoder, 2020, IEEE Signal Processing Letters.

[11] Valero Laparra, et al. End-to-end Optimized Image Compression, 2016, ICLR.

[12] Gregory K. Wallace, et al. The JPEG still picture compression standard, 1991, CACM.

[13] Thomas Wedi, et al. Quantization offsets for video coding, 2005, 2005 IEEE International Symposium on Circuits and Systems.

[15] Jing Zhou, et al. Rate-distortion optimization guided autoencoder for isometric embedding in Euclidean latent space, 2020, ICML.

[16] Vivek K. Goyal, et al. Theoretical foundations of transform coding, 2001, IEEE Signal Processing Magazine.

[17] Sihan Wen, et al. Variational Autoencoder based Image Compression with Pyramidal Features and Context Entropy Model, 2019, CVPR Workshops.

[18] David Minnen, et al. Full Resolution Image Compression with Recurrent Neural Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).