Training Data Subset Search with Ensemble Active Learning
[1] Remus Pop, et al. Deep Ensemble Bayesian Active Learning: Addressing the Mode Collapse issue in Monte Carlo dropout via Ensembles, 2018, ArXiv.
[2] Kaiyong Zhao, et al. AutoML: A Survey of the State-of-the-Art, 2019, Knowl. Based Syst.
[3] Zoubin Ghahramani, et al. Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference, 2015, ArXiv.
[4] Zoubin Ghahramani, et al. Deep Bayesian Active Learning with Image Data, 2017, ICML.
[5] Pietro Perona, et al. Microsoft COCO: Common Objects in Context, 2014, ECCV.
[6] Dahua Lin, et al. Lifelong Learning via Progressive Distillation and Retrospection, 2018, ECCV.
[7] Kirthevasan Kandasamy, et al. Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly, 2019, J. Mach. Learn. Res.
[8] Trevor Darrell, et al. Variational Adversarial Active Learning, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[9] In So Kweon, et al. Learning Loss for Active Learning, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Kan Chen, et al. Billion-scale semi-supervised learning for image classification, 2019, ArXiv.
[11] Leo Breiman, et al. Bagging Predictors, 1996, Machine Learning.
[12] Robert C. Bolles, et al. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, 1981, CACM.
[13] Jitendra Malik, et al. Are All Training Examples Created Equal? An Empirical Study, 2018, ArXiv.
[14] Graham W. Taylor, et al. Learning Confidence for Out-of-Distribution Detection in Neural Networks, 2018, ArXiv.
[15] Antonio Torralba, et al. Are all training examples equally valuable?, 2013, ArXiv.
[16] Charles Blundell, et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2016, NIPS.
[17] Andreas Krause, et al. Efficient High Dimensional Bayesian Optimization with Additivity and Quadrature Fourier Features, 2018, NeurIPS.
[18] D. Sculley, et al. Google Vizier: A Service for Black-Box Optimization, 2017, KDD.
[19] Liang Lin, et al. SNAS: Stochastic Neural Architecture Search, 2018, ICLR.
[20] Zoubin Ghahramani, et al. Bayesian Active Learning for Classification and Preference Learning, 2011, ArXiv.
[21] David A. Cohn, et al. Improving generalization with active learning, 1994, Machine Learning.
[22] Baharan Mirzasoleiman, et al. Selection via Proxy: Efficient Data Selection for Deep Learning, 2019, ICLR.
[23] Mark J. F. Gales, et al. Predictive Uncertainty Estimation via Prior Networks, 2018, NeurIPS.
[24] Jose M. Alvarez, et al. Large-Scale Visual Active Learning with Deep Probabilistic Ensembles, 2018, ArXiv.
[25] Leo Breiman, et al. Random Forests, 2001, Machine Learning.
[26] Robert E. Schapire, et al. A Brief Introduction to Boosting, 1999, IJCAI.
[27] Yi Yang, et al. Searching for a Robust Neural Architecture in Four GPU Hours, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[28] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[29] Andrew McCallum, et al. Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples, 2017, NIPS.
[30] Frank Hutter, et al. Neural Architecture Search: A Survey, 2018, J. Mach. Learn. Res.
[31] Song Han, et al. AMC: AutoML for Model Compression and Acceleration on Mobile Devices, 2018, ECCV.
[32] Andrew McCallum, et al. Energy and Policy Considerations for Deep Learning in NLP, 2019, ACL.
[33] Greg Schohn, et al. Less is More: Active Learning with Support Vector Machines, 2000, ICML.
[34] David Lowell, et al. Practical Obstacles to Deploying Active Learning, 2019, EMNLP/IJCNLP.
[35] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[36] Yarin Gal, et al. Understanding Measures of Uncertainty for Adversarial Example Detection, 2018, UAI.
[37] Yiming Yang, et al. DARTS: Differentiable Architecture Search, 2018, ICLR.
[38] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Sebastian Nowozin, et al. Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift, 2019, NeurIPS.
[40] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[41] Kilian Q. Weinberger, et al. Snapshot Ensembles: Train 1, get M for free, 2017, ICLR.
[42] Cees Snoek, et al. Active Transfer Learning with Zero-Shot Priors: Reusing Past Datasets for Future Tasks, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[43] Yoshua Bengio, et al. An Empirical Study of Example Forgetting during Deep Neural Network Learning, 2018, ICLR.
[44] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[45] Luc Van Gool, et al. The Pascal Visual Object Classes (VOC) Challenge, 2010, International Journal of Computer Vision.
[46] Alex Kendall, et al. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, 2017, NIPS.
[47] Silvio Savarese, et al. Active Learning for Convolutional Neural Networks: A Core-Set Approach, 2017, ICLR.
[48] Neil Houlsby, et al. Transfer Learning with Neural AutoML, 2018, NeurIPS.
[49] Tie-Yan Liu, et al. Neural Architecture Optimization, 2018, NeurIPS.
[50] Lars Kai Hansen, et al. Neural Network Ensembles, 1990, IEEE Trans. Pattern Anal. Mach. Intell.
[51] Hossein Mobahi, et al. Semantic Redundancies in Image-Classification Datasets: The 10% You Don't Need, 2019, ArXiv.
[52] Martial Hebert, et al. Adaptive Semantic Segmentation with a Strategic Curriculum of Proxy Labels, 2018, ArXiv.
[53] Joost van de Weijer, et al. Active Learning for Deep Detection Neural Networks, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[54] Song Han, et al. ADC: Automated Deep Compression and Acceleration with Reinforcement Learning, 2018, ArXiv.
[55] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[56] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[57] Yarin Gal, et al. Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks, 2018, ArXiv.
[58] Burr Settles, et al. Active Learning Literature Survey, 2009.
[59] Jose M. Alvarez, et al. Deep Probabilistic Ensembles: Approximate Variational Inference through KL Regularization, 2018, ArXiv.
[60] Kibok Lee, et al. Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples, 2017, ICLR.
[61] Chen Sun, et al. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[62] Pascal Fua, et al. Introducing Geometry in Active Learning for Image Segmentation, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[63] Alex Graves, et al. Practical Variational Inference for Neural Networks, 2011, NIPS.
[64] C. E. Shannon, et al. A mathematical theory of communication, 1948, Bell System Technical Journal.
[65] Geoffrey E. Hinton, et al. Bayesian Learning for Neural Networks, 1995.
[66] Finale Doshi-Velez, et al. Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning, 2017, ICML.