Rethinking Default Values: a Low Cost and Efficient Strategy to Define Hyperparameters

Machine Learning (ML) algorithms have been successfully employed by practitioners from a wide range of backgrounds. One of the reasons for the popularity of ML is its ability to consistently deliver accurate results, which can be further improved by adjusting hyperparameters (HPs). However, many practitioners have limited knowledge of the algorithms they use and do not take advantage of suitable HP settings. In general, HP values are defined by trial and error, by tuning, or by using default values. Trial and error is subjective, time-consuming, and dependent on user experience. Tuning techniques search for HP values that maximize the predictive performance of the induced models for a given dataset, but at the cost of a high computational burden and target specificity. To avoid tuning costs, practitioners adopt the default values suggested by the algorithm developers or by the tools implementing the algorithm. Although default values usually yield models with acceptable predictive performance, different implementations of the same algorithm may suggest distinct default values. As a balance between tuning and using default values, we propose a strategy to generate new optimized default values. Our approach is grounded on a small set of optimized settings able to obtain better predictive performance than the defaults provided by popular tools. These HP candidates are estimated from a pool of promising values tuned on a small and informative collection of datasets. After a large-scale experiment and a careful analysis of the results, we concluded that our approach delivers better default values. Moreover, it leads to solutions that are competitive with tuned values while being easier to use and having a lower cost. Based on our results, we also extracted simple rules to guide practitioners in deciding whether to use the new default values or a tuning approach.
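To make the general idea concrete, the following is a minimal sketch (not the authors' exact protocol) of how a shared "new default" configuration could be derived for an SVM: candidate HP settings are scored on a small pool of datasets and the setting with the best average rank across datasets is promoted to a default. The candidate grid, the three scikit-learn toy datasets, and the rank-based aggregation are illustrative assumptions.

```python
# Sketch: derive a shared default SVM setting from a small pool of datasets.
# Assumes scikit-learn; datasets, search ranges and aggregation are placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer, load_iris, load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Small, informative pool of datasets (stand-ins for a larger collection).
datasets = [load_iris(return_X_y=True),
            load_wine(return_X_y=True),
            load_breast_cancer(return_X_y=True)]

# Candidate HP settings sampled from wide log-uniform ranges
# (a stand-in for promising values obtained by per-dataset tuning).
candidates = [{"C": 2.0 ** rng.uniform(-5, 15),
               "gamma": 2.0 ** rng.uniform(-15, 3)}
              for _ in range(30)]

# Cross-validated score of every candidate on every dataset.
scores = np.zeros((len(candidates), len(datasets)))
for i, hp in enumerate(candidates):
    for j, (X, y) in enumerate(datasets):
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", **hp))
        scores[i, j] = cross_val_score(model, X, y, cv=5).mean()

# Aggregate by average rank across datasets (rank 0 = best on that dataset),
# so the chosen setting generalizes instead of overfitting one dataset.
ranks = np.argsort(np.argsort(-scores, axis=0), axis=0).mean(axis=1)
new_default = candidates[int(np.argmin(ranks))]
print("Candidate shared default:", new_default)
```

In practice the pool of datasets, the tuning procedure used to produce the candidates, and the aggregation criterion would all be chosen more carefully; the sketch only illustrates the "tune on a few datasets, promote the setting that works well everywhere" strategy described above.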
