Neuroevolution Strategy for Time Series Prediction

Optimization is a concept, a process, and a method that people use daily to solve their problems. For many scientists, nature itself and the mechanisms found in it have been the source of optimization methods. Neural networks, inspired by the neurons of the human brain, have gained wide recognition in recent years and provide solutions to everyday problems. Evolutionary algorithms are known for their efficiency and speed on problems whose optimal solution lies within a vast space of candidate solutions, and also for their simplicity, since their implementation does not require complex mathematics. The combination of these two techniques is called neuroevolution. The purpose of this research is to combine and improve existing neuroevolution architectures in order to solve time series problems. We propose a new, improved strategy for such a system and compare its performance against an existing system on five different datasets. Based on the final results and supporting statistical analysis, we conclude that our system performs substantially better than the existing system on all five datasets.
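To make the neuroevolution idea described above concrete, the following is a minimal illustrative sketch (not the strategy proposed in this research): the weights of a tiny one-hidden-layer network are evolved with a simple (mu + lambda) evolution strategy to perform one-step-ahead prediction on a synthetic sine-wave series. All names, network sizes, and hyperparameters here are our own illustrative choices.

```python
# Minimal neuroevolution sketch: evolve the flat weight vector of a small
# feed-forward network with a (mu + lambda) evolution strategy, using
# negative mean squared prediction error as the fitness. Illustrative only.
import math
import random

random.seed(0)

WINDOW = 4   # number of lagged inputs
HIDDEN = 6   # hidden units
# HIDDEN units with (WINDOW weights + 1 bias) each, plus output weights + bias
N_WEIGHTS = HIDDEN * (WINDOW + 1) + (HIDDEN + 1)

# Synthetic time series: a plain sine wave
series = [math.sin(0.2 * t) for t in range(120)]

def predict(w, window):
    """Feed one lag window through the network encoded by flat vector w."""
    idx = 0
    hidden = []
    for _ in range(HIDDEN):
        s = w[idx + WINDOW]  # bias of this hidden unit
        for i in range(WINDOW):
            s += w[idx + i] * window[i]
        idx += WINDOW + 1
        hidden.append(math.tanh(s))
    out = w[idx + HIDDEN]  # output bias
    for h in range(HIDDEN):
        out += w[idx + h] * hidden[h]
    return out

def fitness(w):
    """Negative mean squared error of one-step-ahead prediction."""
    err, n = 0.0, 0
    for t in range(WINDOW, len(series)):
        err += (predict(w, series[t - WINDOW:t]) - series[t]) ** 2
        n += 1
    return -err / n

def mutate(w, sigma=0.1):
    """Gaussian perturbation of every weight."""
    return [x + random.gauss(0.0, sigma) for x in w]

# (mu + lambda) evolution strategy: keep the MU fittest of parents + offspring
MU, LAM, GENS = 8, 20, 30
pop = [[random.uniform(-0.5, 0.5) for _ in range(N_WEIGHTS)] for _ in range(MU)]
for _ in range(GENS):
    offspring = [mutate(random.choice(pop)) for _ in range(LAM)]
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:MU]

best_mse = -fitness(pop[0])
print(f"best MSE after {GENS} generations: {best_mse:.4f}")
```

This gradient-free loop is what makes neuroevolution simple to implement: only forward passes and random perturbations are needed, with no backpropagation or derivative computation.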
