Why Is Auto-Encoding Difficult for Genetic Programming?

Unsupervised learning is an important component of many recent successes in machine learning. The autoencoder neural network is one of the most prominent approaches to unsupervised learning. Here, we use the genetic programming paradigm to create autoencoders and find that the task is difficult for genetic programming, even on small datasets that are easy for neural networks. We investigate which aspects of the autoencoding task make it difficult for genetic programming.
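To make the autoencoding task concrete (this sketch is illustrative, not taken from the paper): an autoencoder learns an encoder/decoder pair that reconstructs its input through a narrower bottleneck, minimizing reconstruction error. Below is a minimal linear autoencoder on toy data, trained by plain gradient descent; all dimensions, learning rate, and data are assumptions made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # 200 samples, 4 features
X[:, 2] = X[:, 0] + X[:, 1]          # add redundancy so a
X[:, 3] = X[:, 0] - X[:, 1]          # 2-dimensional code suffices

W_enc = rng.normal(scale=0.1, size=(4, 2))   # encoder weights (4 -> 2)
W_dec = rng.normal(scale=0.1, size=(2, 4))   # decoder weights (2 -> 4)
lr = 0.01

for _ in range(3000):
    H = X @ W_enc                    # bottleneck code
    X_hat = H @ W_dec                # reconstruction
    err = X_hat - X
    loss = (err ** 2).mean()         # mean squared reconstruction error
    # gradients of the reconstruction error w.r.t. both weight matrices
    g_dec = H.T @ err * (2 / err.size)
    g_enc = X.T @ (err @ W_dec.T) * (2 / err.size)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(f"final reconstruction MSE: {loss:.4f}")
```

For a neural network, gradient descent drives this loss toward zero because the data lies in a 2-dimensional subspace; a genetic-programming system must instead discover comparable encoder and decoder programs by evolutionary search, which is the setting this paper studies.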
