DANICE: Domain adaptation without forgetting in neural image compression

Neural image compression (NIC) is a new coding paradigm in which coding capabilities are captured by deep models learned from data. This data-driven nature enables new potential functionalities. In this paper, we study the adaptability of codecs to custom domains of interest. We show that NIC codecs are transferable and that they can be adapted with relatively few target-domain images. However, naive adaptation interferes with the solution optimized for the original source domain, causing the codec to forget its original coding capabilities in that domain, and may even break compatibility with previously encoded bitstreams. To address these problems, we propose Codec Adaptation without Forgetting (CAwF), a framework that avoids them by adding a small set of custom parameters while the source codec remains embedded and unchanged during the adaptation process. Experiments demonstrate its effectiveness and provide useful insights into the characteristics of catastrophic interference in NIC.
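The core mechanism described above, keeping the source codec frozen while training only a small set of added parameters, can be illustrated with a toy sketch. This is not the paper's actual architecture; it is a minimal NumPy illustration, assuming a hypothetical linear "codec" transform `W` and a residual adapter `A`, showing that source-domain behavior is bit-exact after adaptation because only `A` changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "codec": a linear analysis transform W (source-domain weights).
# CAwF-style adaptation keeps W frozen; a small residual adapter A holds the
# custom parameters learned on the target domain.
d = 8
W = rng.normal(size=(d, d))   # frozen source parameters
A = np.zeros((d, d))          # custom adapter parameters (trainable)

def encode(x, use_adapter):
    # The source path is untouched; the adapter contributes a correction
    # only when explicitly enabled.
    y = W @ x
    if use_adapter:
        y = y + A @ x
    return y

x_src = rng.normal(size=d)
y_before = encode(x_src, use_adapter=False)

# "Adapt": update only A (a single arbitrary step stands in for training here).
A += 0.1 * rng.normal(size=(d, d))

y_after = encode(x_src, use_adapter=False)
# With the adapter disabled, source-domain coding is unchanged: no forgetting,
# and bitstreams produced by the original codec remain decodable.
assert np.allclose(y_before, y_after)
```

The design choice this mirrors is that forgetting cannot occur by construction: the original solution is a strict subnetwork of the adapted model, so disabling the adapter recovers the source codec exactly.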
