Frank Hutter | Danny Stoll | Jorg K.H. Franke | Diane Wagner | Simon Selg
[1] Tianqi Chen, et al. Net2Net: Accelerating Learning via Knowledge Transfer, 2015, ICLR.
[2] Joaquin Vanschoren, et al. Meta-Learning: A Survey, 2018, Automated Machine Learning.
[3] Kevin Leyton-Brown, et al. An Efficient Approach for Assessing Hyperparameter Importance, 2014, ICML.
[4] Sebastian Thrun, et al. Lifelong robot learning, 1993, Robotics Auton. Syst.
[5] Eytan Bakshy, et al. Scalable Meta-Learning for Bayesian Optimization, 2018, ArXiv.
[6] Yi Yang, et al. NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search, 2020, ICLR.
[7] Hui Xiong, et al. A Comprehensive Survey on Transfer Learning, 2021, Proceedings of the IEEE.
[8] Aaron Klein, et al. BOHB: Robust and Efficient Hyperparameter Optimization at Scale, 2018, ICML.
[9] Svetha Venkatesh, et al. Regret Bounds for Transfer Learning in Bayesian Optimisation, 2017, AISTATS.
[10] Oren Etzioni, et al. Green AI, 2019, Commun. ACM.
[11] Tinne Tuytelaars, et al. A Continual Learning Survey: Defying Forgetting in Classification Tasks, 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[12] Taghi M. Khoshgoftaar, et al. A survey on heterogeneous transfer learning, 2017, Journal of Big Data.
[13] Kevin Leyton-Brown, et al. Surrogate Benchmarks for Hyperparameter Optimization, 2014, MetaSel@ECAI.
[14] Xavier Bouthillier, et al. Survey of machine-learning experimental methods at NeurIPS 2019 and ICLR 2020, 2020.
[15] Holger H. Hoos, et al. Programming by optimization, 2012, Commun. ACM.
[16] Aaron Klein, et al. Hyperparameter Optimization, 2017, Encyclopedia of Machine Learning and Data Mining.
[17] Jakub W. Pachocki, et al. Dota 2 with Large Scale Deep Reinforcement Learning, 2019, ArXiv.
[18] Ser-Nam Lim, et al. A Metric Learning Reality Check, 2020, ECCV.
[19] Jasper Snoek, et al. Practical Bayesian Optimization of Machine Learning Algorithms, 2012, NIPS.
[20] Stefano Soatto, et al. Rethinking the Hyperparameters for Fine-tuning, 2020, ICLR.
[21] David E. Goldberg, et al. Multi-objective Bayesian optimization algorithm, 2002.
[22] Nando de Freitas, et al. Bayesian Optimization in AlphaGo, 2018, ArXiv.
[23] Andrew McCallum, et al. Energy and Policy Considerations for Deep Learning in NLP, 2019, ACL.
[24] Matthias W. Seeger, et al. Scalable Hyperparameter Transfer Learning, 2018, NeurIPS.
[25] Kaiming He, et al. Group Normalization, 2018, ECCV.
[26] Leslie Pack Kaelbling, et al. Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior, 2018, NeurIPS.
[27] Jasper Snoek, et al. Multi-Task Bayesian Optimization, 2013, NIPS.
[28] David D. Cox, et al. Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures, 2013, ICML.
[29] Håkan Grahn, et al. A Case for Guided Machine Learning, 2019, CD-MAKE.
[30] Wolfram Burgard, et al. Most likely heteroscedastic Gaussian process regression, 2007, ICML.
[31] Aaron Klein, et al. Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization, 2019, ArXiv.
[32] Ameet Talwalkar, et al. Hyperband: Bandit-Based Configuration Evaluation for Hyperparameter Optimization, 2016, ICLR.
[33] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[34] Pierre Baldi, et al. Learning Activation Functions to Improve Deep Neural Networks, 2014, ICLR.
[35] Jascha Sohl-Dickstein, et al. Using a thousand optimization tasks to learn hyperparameter search strategies, 2020, ArXiv.
[36] Yoshua Bengio, et al. Algorithms for Hyper-Parameter Optimization, 2011, NIPS.
[37] Lars Schmidt-Thieme, et al. Scalable Gaussian process-based transfer surrogates for hyperparameter optimization, 2017, Machine Learning.