Exploring Overfitting in Genetic Programming

The problem of overfitting (fitting the training examples so closely that generalization power is lost) is encountered in all supervised machine learning schemes. This study explores some aspects of overfitting in the particular case of genetic programming. After recalling the causes usually invoked to explain overfitting, such as hypothesis complexity or noisy learning examples, we test and compare the resistance to overfitting of three variants of genetic programming algorithms (basic GP, size-fair crossover GP, and GP with boosting) on two benchmarks: a symbolic regression problem and a classification problem. Based on these results, we propose guidelines to help reduce overfitting in genetic programming.