Weight Decay Scheduling and Knowledge Distillation for Active Learning
[1] Mark Craven, et al. Multiple-Instance Active Learning, 2007, NIPS.
[2] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[3] In So Kweon, et al. Learning Loss for Active Learning, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[5] Joachim Denzler, et al. Active and Continuous Exploration with Deep Neural Networks and Expected Model Output Changes, 2016, ArXiv.
[6] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Dan Roth, et al. Margin-Based Active Learning for Structured Output Spaces, 2006, ECML.
[8] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[9] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Kristen Grauman, et al. Large-Scale Live Active Learning: Training Object Detectors with Crawled Data and Crowds, 2011, CVPR 2011.
[11] Joachim Denzler, et al. Selecting Influential Examples: Active Learning with Expected Model Output Changes, 2014, ECCV.
[12] Anima Anandkumar, et al. Deep Active Learning for Named Entity Recognition, 2017, Rep4NLP@ACL.
[13] Hao Li, et al. Visualizing the Loss Landscape of Neural Nets, 2017, NeurIPS.
[14] Rich Caruana, et al. Do Deep Nets Really Need to be Deep?, 2013, NIPS.
[15] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[16] Daphne Koller, et al. Support Vector Machine Active Learning with Applications to Text Classification, 2000, J. Mach. Learn. Res..
[17] David D. Lewis, et al. Heterogeneous Uncertainty Sampling for Supervised Learning, 1994, ICML.
[18] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res..
[19] Andreas Nürnberger, et al. The Power of Ensembles for Active Learning in Image Classification, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[20] Arnold W. M. Smeulders, et al. Active learning using pre-clustering, 2004, ICML.
[21] Kaiming He, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[22] Elad Hoffer, et al. Norm matters: efficient and accurate normalization schemes in deep networks, 2018, NeurIPS.
[23] Yuhong Guo, et al. Active Instance Sampling via Matrix Partition, 2010, NIPS.
[24] Junmo Kim, et al. A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Kilian Q. Weinberger, et al. Deep Networks with Stochastic Depth, 2016, ECCV.
[26] Mark Craven, et al. An Analysis of Active Learning Strategies for Sequence Labeling Tasks, 2008, EMNLP.
[27] Raquel Urtasun, et al. Latent Structured Active Learning, 2013, NIPS.
[28] Sangdoo Yun, et al. A Comprehensive Overhaul of Feature Distillation, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[29] Edward Y. Chang, et al. Support vector machine active learning for image retrieval, 2001, MULTIMEDIA '01.
[30] Quoc V. Le, et al. RandAugment: Practical automated data augmentation with a reduced search space, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[31] Anders Krogh, et al. A Simple Weight Decay Can Improve Generalization, 1991, NIPS.
[32] Jorge Nocedal, et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2016, ICLR.
[33] William A. Gale, et al. A sequential algorithm for training text classifiers, 1994, SIGIR '94.
[34] Nikolaos Papanikolopoulos, et al. Multi-class active learning for image classification, 2009, CVPR.
[35] Yoshua Bengio, et al. Deep Sparse Rectifier Neural Networks, 2011, AISTATS.
[36] Allen Y. Yang, et al. A Convex Optimization Framework for Active Learning, 2013, 2013 IEEE International Conference on Computer Vision.
[37] Quoc V. Le, et al. Self-Training With Noisy Student Improves ImageNet Classification, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[38] Ruimao Zhang, et al. Cost-Effective Active Learning for Deep Image Classification, 2017, IEEE Transactions on Circuits and Systems for Video Technology.
[39] Tom Drummond, et al. The Importance of Metric Learning for Robotic Vision: Open Set Recognition and Active Learning, 2019, 2019 International Conference on Robotics and Automation (ICRA).
[40] Jangho Kim, et al. Paraphrasing Complex Network: Network Compression via Factor Transfer, 2018, NeurIPS.