Image classification based on self-distillation
L. Qing | Yuting Li | Honggang Chen | Xiaohai He | Qiang Liu