A Study on Ensemble Learning for Time Series Forecasting and the Need for Meta-Learning

The contribution of this work is twofold: (1) We introduce a collection of ensemble methods for time series forecasting that combine the predictions of base models. We provide insights into the power of ensemble learning for forecasting, reporting experimental results on roughly 16,000 openly available datasets from the M3, M4, and M5 competitions as well as FRED (Federal Reserve Economic Data). While the experiments show that ensembles improve forecasting accuracy, no single ensemble strategy (and hyperparameter configuration) wins consistently across datasets. Thus, in addition, (2) we propose a meta-learning step that chooses, for each dataset, the most appropriate ensemble method and its hyperparameter configuration based on dataset meta-features.
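To make the two-step idea concrete, the following is a minimal sketch (not the authors' implementation) of combining base-model forecasts with simple ensemble strategies and then using a meta-learner to pick a strategy for a new dataset from its meta-features. All function names, the toy meta-features, and the stand-in labels and forecasts are illustrative assumptions; a real setup would obtain the labels by evaluating each ensemble strategy on each dataset.

```python
# Sketch only: illustrates "ensemble of base forecasts" + "meta-learning the
# ensemble choice from dataset meta-features". Names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mean_ensemble(base_forecasts):
    # base_forecasts: array of shape (n_models, horizon)
    return base_forecasts.mean(axis=0)

def median_ensemble(base_forecasts):
    return np.median(base_forecasts, axis=0)

ENSEMBLES = {"mean": mean_ensemble, "median": median_ensemble}

def extract_meta_features(y):
    # Toy dataset meta-features: series length, variance, lag-1 autocorrelation.
    ac1 = np.corrcoef(y[:-1], y[1:])[0, 1] if len(y) > 2 else 0.0
    return np.array([len(y), np.var(y), ac1])

# Meta-learning step: learn a mapping from meta-features to the ensemble
# strategy that performed best on each training dataset.
rng = np.random.default_rng(0)
series = [rng.normal(size=rng.integers(50, 200)).cumsum() for _ in range(100)]
X_meta = np.vstack([extract_meta_features(y) for y in series])
best_strategy = rng.integers(0, len(ENSEMBLES), size=len(series))  # stand-in labels

meta_learner = RandomForestClassifier(n_estimators=100, random_state=0)
meta_learner.fit(X_meta, best_strategy)

# For a new dataset: predict the strategy, then combine the base forecasts.
new_series = rng.normal(size=120).cumsum()
strategy = list(ENSEMBLES)[meta_learner.predict([extract_meta_features(new_series)])[0]]
base_forecasts = rng.normal(size=(3, 12))  # stand-in for 3 base models, horizon 12
combined = ENSEMBLES[strategy](base_forecasts)
print(strategy, combined[:3])
```

In practice the meta-feature extraction and the pool of ensemble strategies would be far richer than this toy example, but the control flow (train a meta-learner offline, then dispatch each new dataset to an ensemble configuration) is the same.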
