Comparative performance analysis of neural network architectures on the H2O platform for various activation functions

Deep learning (deep structured learning, hierarchical learning, or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures, or otherwise composed of multiple non-linear transformations. In this paper, we present the results of testing neural network architectures on the H2O platform for various activation functions, stopping metrics, and other parameters of the machine learning algorithm. It was demonstrated for the use case of the MNIST database of handwritten digits in single-threaded mode that blind selection of these parameters can hugely increase the runtime (by 2–3 orders of magnitude) without a significant increase in precision. This result can have a crucial influence on the optimization of available and new machine learning methods, especially for image recognition problems.
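To illustrate the kind of parameter sweep the abstract describes, below is a minimal sketch (not the authors' exact experimental script) using H2O's Python API: it trains a deep learning classifier on MNIST under several activation functions and stopping metrics in single-threaded mode. The file names, hidden-layer sizes, epoch count, and stopping tolerances are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch, assuming MNIST is available as CSVs with a "label" column;
# file names and hyperparameters below are placeholders, not the paper's values.
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init(nthreads=1)  # single-threaded mode, as in the paper's use case

train = h2o.import_file("mnist_train.csv")
test = h2o.import_file("mnist_test.csv")
train["label"] = train["label"].asfactor()  # treat digits as classes
test["label"] = test["label"].asfactor()
predictors = [c for c in train.columns if c != "label"]

# Sweep activation functions and stopping metrics, timing each combination.
for activation in ["Rectifier", "Tanh", "Maxout", "RectifierWithDropout"]:
    for metric in ["logloss", "misclassification"]:
        model = H2ODeepLearningEstimator(
            activation=activation,
            hidden=[200, 200],        # illustrative layer sizes
            epochs=10,                # illustrative epoch budget
            stopping_metric=metric,
            stopping_rounds=3,
            stopping_tolerance=1e-3,
        )
        model.train(x=predictors, y="label",
                    training_frame=train, validation_frame=test)
        print(activation, metric, model.logloss(valid=True))
```

Wrapping each `model.train` call in a timer (e.g. `time.perf_counter`) would reproduce the kind of runtime-versus-precision comparison reported in the paper, where some parameter combinations run orders of magnitude longer for little gain in accuracy.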
