Josif Grabocka | Marius Lindauer | Frank Hutter | Arlind Kadra
[1] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[2] Sergei Popov, et al. Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data, 2019, ICLR.
[3] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[4] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[5] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2017, CVPR.
[6] Taesup Kim, et al. Fast AutoAugment, 2019, NeurIPS.
[7] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning, 2016, AAAI.
[8] Xaq Pitkow, et al. Skip Connections Eliminate Singularities, 2017, ICLR.
[9] Joachim M. Buhmann, et al. The Balanced Accuracy and Its Posterior Distribution, 2010, ICPR.
[10] Ameet Talwalkar, et al. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization, 2016, J. Mach. Learn. Res.
[11] J. Vanschoren. Meta-Learning, 2018, Automated Machine Learning.
[12] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[13] Aaron Klein, et al. Towards Automatically-Tuned Deep Neural Networks, 2019, Automated Machine Learning.
[14] Nitish Srivastava, et al. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, 2014, J. Mach. Learn. Res.
[15] Tianqi Chen, et al. XGBoost: A Scalable Tree Boosting System, 2016, KDD.
[16] Gaël Varoquaux, et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res.
[17] Aaron Klein, et al. Efficient and Robust Automated Machine Learning, 2015, NIPS.
[18] Frank Hutter, et al. Decoupled Weight Decay Regularization, 2017, ICLR.
[19] Andrew Gordon Wilson, et al. Averaging Weights Leads to Wider Optima and Better Generalization, 2018, UAI.
[20] Sercan O. Arik, et al. TabNet: Attentive Interpretable Tabular Learning, 2019, AAAI.
[21] Bernd Bischl, et al. An Open Source AutoML Benchmark, 2019, arXiv.
[22] Diederik P. Kingma, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.
[23] Kilian Q. Weinberger, et al. Snapshot Ensembles: Train 1, get M for free, 2017, ICLR.
[24] Jonas Mockus. Application of Bayesian Approach to Numerical Methods of Global and Stochastic Optimization, 1994, J. Glob. Optim.
[25] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[26] Geoffrey E. Hinton, et al. Lookahead Optimizer: k steps forward, 1 step back, 2019, NeurIPS.
[27] Frank Hutter, et al. SGDR: Stochastic Gradient Descent with Warm Restarts, 2016, ICLR.
[29] Sanjeev Arora, et al. Implicit Regularization in Deep Matrix Factorization, 2019, NeurIPS.
[30] Xavier Gastaldi. Shake-Shake regularization of 3-branch residual networks, 2017, ICLR.
[31] Mario Bertero, et al. The Stability of Inverse Problems, 1980.
[32] Graham W. Taylor, et al. Improved Regularization of Convolutional Neural Networks with Cutout, 2017, arXiv.
[33] Janez Demšar. Statistical Comparisons of Classifiers over Multiple Data Sets, 2006, J. Mach. Learn. Res.
[34] Hang Zhang, et al. AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data, 2020, arXiv.
[35] Y. Yao, et al. On Early Stopping in Gradient Descent Learning, 2007.
[36] Seong Joon Oh, et al. CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features, 2019, ICCV.
[37] Gavin Brown, et al. Ensemble Learning, 2010, Encyclopedia of Machine Learning and Data Mining.
[38] Aaron Klein, et al. BOHB: Robust and Efficient Hyperparameter Optimization at Scale, 2018, ICML.
[40] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[41] Marius Lindauer, et al. Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL, 2020, arXiv.
[42] Kyunghyun Cho, et al. Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models, 2020, ICLR.
[43] Quoc V. Le, et al. AutoAugment: Learning Augmentation Strategies From Data, 2019, CVPR.
[44] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[45] Eran Segal, et al. Regularization Learning Networks, 2018, NeurIPS.
[46] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[48] Luís Torgo, et al. OpenML: networked science in machine learning, 2014, SIGKDD Explorations.
[49] Ran El-Yaniv, et al. Net-DNF: Effective Deep Modeling of Tabular Data, 2021, ICLR.