Diversity-driven wide learning for training distributed classification models

To address the scalability challenges faced by Deep Learning methods, we propose Wide Learning, a model that scales horizontally rather than vertically and thereby enables distributed learning. This approach first trains a repertoire of architecturally diverse neural networks of low complexity in parallel. Each network in the repertoire extracts a set of features from the dataset; these are then aggregated in a second, short training phase in a centralised model to solve the classification task. The repertoire is generated using a quality diversity evolutionary algorithm (MAP-Elites), which returns a set of neural networks that are diverse with respect to feature descriptors partially describing their architecture and optimised with respect to their ability to solve the task. The technique is shown to perform well on two benchmark classification problems that have previously been tackled in the literature with Deep Learning techniques. Additional experiments provide insight into the role that diversity plays in the performance of the repertoire. We propose that evolving neural networks by promoting architectural diversity could in future lead to better results in some domains where current approaches have fallen short.
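To make the repertoire-building phase concrete, the sketch below shows a minimal MAP-Elites loop in Python. It is an illustrative assumption, not the paper's implementation: the genome is reduced to a (depth, width) pair, the behaviour descriptor is that same architectural pair binned onto a grid, and the fitness function is a toy stand-in for the validation accuracy a low-complexity network would achieve after training. All names and parameters are hypothetical.

```python
import random

# Hypothetical simplification of the repertoire-building phase: MAP-Elites keeps,
# for each cell of an architectural-descriptor grid, the best-performing network
# genome seen so far, yielding a repertoire that is both diverse and optimised.

DESCRIPTOR_BINS = (8, 8)          # grid resolution over (depth, width) descriptors
MAX_LAYERS, MAX_WIDTH = 8, 256

def random_genome():
    return (random.randint(1, MAX_LAYERS), random.randint(4, MAX_WIDTH))

def mutate(genome):
    layers, width = genome
    layers = min(MAX_LAYERS, max(1, layers + random.choice((-1, 0, 1))))
    width = min(MAX_WIDTH, max(4, width + random.randint(-16, 16)))
    return (layers, width)

def evaluate(genome):
    # Placeholder for "train a low-complexity network and return its validation accuracy".
    layers, width = genome
    return 1.0 / (1.0 + abs(layers - 4) + abs(width - 64) / 64.0)

def descriptor_cell(genome):
    # Map the architectural descriptors onto a discrete grid cell.
    layers, width = genome
    return (min(DESCRIPTOR_BINS[0] - 1, layers * DESCRIPTOR_BINS[0] // (MAX_LAYERS + 1)),
            min(DESCRIPTOR_BINS[1] - 1, width * DESCRIPTOR_BINS[1] // (MAX_WIDTH + 1)))

def map_elites(iterations=2000):
    archive = {}                                  # cell -> (fitness, genome)
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            # Select an existing elite at random and mutate it.
            genome = mutate(random.choice(list(archive.values()))[1])
        else:
            genome = random_genome()
        fitness, cell = evaluate(genome), descriptor_cell(genome)
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)     # keep only the elite per cell
    return archive

if __name__ == "__main__":
    repertoire = map_elites()
    print(f"{len(repertoire)} diverse elites retained for the feature-aggregation phase")
```

In the full method, each retained elite would correspond to a trained low-complexity network whose extracted features feed the second, centralised training phase; the toy fitness above merely illustrates where that evaluation would plug in.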

[1] Jean-Baptiste Mouret and Jeff Clune. Illuminating search spaces by mapping elites. arXiv, 2015.

[2] Thomas G. Dietterich. Multiple Classifier Systems. Lecture Notes in Computer Science, 2000.