How meta-heuristic algorithms contribute to deep learning in the hype of big data analytics

Deep learning (DL) is among the fastest-emerging contemporary machine learning techniques; it mimics the cognitive patterns of the animal visual cortex to learn new abstract features automatically through deep, hierarchical layers. DL is widely believed to be the most suitable tool so far for extracting insights from the very large volumes of so-called big data. Nevertheless, one of the three "V"s of big data is velocity, which implies that learning must be incremental as data accumulate rapidly: DL must be both fast and accurate. By design, DL extends the feed-forward artificial neural network with many hidden layers of neurons, yielding the deep neural network (DNN). Training a DNN suffers from inefficiency owing to the very long training time required. Obtaining the most accurate DNN within a reasonable run-time is a challenge, given the potentially many parameters in the DNN model configuration and the high dimensionality of the feature space in the training dataset. Meta-heuristics have a history of successfully optimizing machine learning models. How well meta-heuristics can be used to optimize DL in the context of big data analytics is the thematic topic we ponder in this paper. As a position paper, we review recent advances in applying meta-heuristics to DL, discuss their pros and cons, and point out feasible research directions for bridging the gaps between meta-heuristics and DL.
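To make the idea concrete, the following is a minimal sketch of how a population-based meta-heuristic (here, a plain particle swarm optimization, one of the families surveyed in this paper) could search over DNN hyper-parameters. The objective `surrogate_loss` is a hypothetical stand-in of our own devising: in a real pipeline it would train a DNN with the candidate hyper-parameters and return its validation error, which is exactly the expensive step a meta-heuristic tries to call as few times as possible. All names, bounds, and coefficient values below are illustrative assumptions, not a prescription from the paper.

```python
import random

# Hypothetical surrogate for validation loss as a function of two
# hyper-parameters: log10(learning rate) and hidden-layer width.
# In practice this function would train a DNN and return its
# validation error; the quadratic bowl is only a cheap stand-in.
def surrogate_loss(log_lr, hidden):
    return (log_lr + 3.0) ** 2 + ((hidden - 64.0) / 32.0) ** 2

def pso_search(n_particles=10, n_iter=50, seed=0):
    rng = random.Random(seed)
    # Initial positions: log_lr in [-6, 0], hidden units in [8, 256].
    pos = [[rng.uniform(-6.0, 0.0), rng.uniform(8.0, 256.0)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    pbest_f = [surrogate_loss(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]        # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                # Standard PSO update: inertia + cognitive + social terms.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = surrogate_loss(*pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, loss = pso_search()
print(best, loss)
```

Note that the swarm needs only black-box evaluations of the objective, no gradients, which is why meta-heuristics suit hyper-parameter configuration: the validation error of a trained DNN is not differentiable with respect to choices such as layer width.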
