Songnan Li | Wei Wang | Wei Jiang | Shan Liu
[1] Akshay Pushparaja, et al. CompressAI: a PyTorch library and evaluation platform for end-to-end compression research, 2020, ArXiv.
[2] Valero Laparra, et al. End-to-end Optimized Image Compression, 2016, ICLR.
[3] Eirikur Agustsson, et al. Universally Quantized Neural Compression, 2020, NeurIPS.
[4] Nam Ik Cho, et al. Meta-Transfer Learning for Zero-Shot Super-Resolution, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Yoram Singer, et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, 2011, J. Mach. Learn. Res..
[6] David Minnen, et al. Variational image compression with a scale hyperprior, 2018, ICLR.
[7] David Minnen, et al. Variable Rate Image Compression with Recurrent Neural Networks, 2015, ICLR.
[8] David Minnen, et al. Full Resolution Image Compression with Recurrent Neural Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Zhizheng Zhang, et al. 3-D Context Entropy Model for Improved Practical Image Compression, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[10] Steven C. H. Hoi, et al. Online Deep Learning: Learning Deep Neural Networks on the Fly, 2017, IJCAI.
[11] Andre Wibisono, et al. Streaming Variational Bayes, 2013, NIPS.
[12] Sergey Levine, et al. Online Meta-Learning, 2019, ICML.
[13] Jungwon Lee, et al. Variable Rate Deep Image Compression With a Conditional Autoencoder, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[14] Sergey Levine, et al. Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL, 2018, ICLR.
[15] Pieter Abbeel, et al. A Simple Neural Attentive Meta-Learner, 2017, ICLR.
[16] Jooyoung Lee, et al. Context-adaptive Entropy Model for End-to-end Optimized Image Compression, 2018, ICLR.
[17] Gregory K. Wallace, et al. The JPEG still picture compression standard, 1991, CACM.
[18] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[19] David Minnen, et al. Joint Autoregressive and Hierarchical Priors for Learned Image Compression, 2018, NeurIPS.
[20] Luc Van Gool, et al. Conditional Probability Models for Deep Image Compression, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[21] Joost van de Weijer, et al. Variable Rate Deep Image Compression With Modulated Autoencoder, 2019, IEEE Signal Processing Letters.
[22] Gary J. Sullivan, et al. Overview of the High Efficiency Video Coding (HEVC) Standard, 2012, IEEE Transactions on Circuits and Systems for Video Technology.
[23] Richard S. Zemel, et al. Prototypical Networks for Few-shot Learning, 2017, NIPS.
[24] Amos J. Storkey, et al. How to train your MAML, 2018, ICLR.
[25] Songnan Li, et al. PnG: Micro-structured Prune-and-Grow Networks for Flexible Image Restoration, 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[26] Brian Kulis, et al. Substitutional Neural Image Compression, 2021, 2022 Picture Coding Symposium (PCS).
[27] Edwin Pan, et al. MetaHDR: Model-Agnostic Meta-Learning for HDR Image Reconstruction, 2021, ArXiv.
[28] Sergey Levine, et al. Learning to Adapt in Dynamic, Real-World Environments through Meta-Reinforcement Learning, 2018, ICLR.
[29] Zhou Wang, et al. Multi-scale structural similarity for image quality assessment, 2003.
[30] Zhengxue Cheng, et al. Learned Image Compression with Discretized Gaussian Mixture Likelihoods and Attention Modules, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[31] Yan Wang, et al. Checkerboard Context Model for Efficient Learned Image Compression, 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[32] Majid Rabbani, et al. An overview of the JPEG 2000 still image compression standard, 2002, Signal Process. Image Commun..