Generalisation is a non-trivial problem in machine learning, and it is even more acute for neural networks, which can induce widely varying degrees of freedom. It is influenced by many aspects of network design, such as network size, initial conditions, learning rate, weight decay factor, and pruning algorithms. Despite continuing research efforts, no practical solution has emerged that consistently offers superior generalisation. We present a novel approach for handling complex machine-learning problems. A multiobjective genetic algorithm is used to identify (near-)optimal subspaces for hierarchical learning. This strategy of explicitly partitioning the data before mapping the partitions onto a hierarchical classifier is found both to reduce learning complexity and to reduce classification time. The classification performance of several algorithms is compared, and we argue that neural modules are better suited to learning the localised decision surfaces of such partitions and offer better generalisation.
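To make the learning-follows-decomposition idea concrete, the sketch below illustrates one possible reading of it: a toy multiobjective evolutionary search chooses reference points that partition the input space under two example objectives (within-partition spread and partition-size imbalance), and a small neural module is then trained on each partition. This is not the paper's algorithm; the objectives, the reference-point encoding, the use of scikit-learn's MLPClassifier, and all hyper-parameters are assumptions made purely for illustration.

```python
# Toy sketch of a learning-follows-decomposition pipeline (illustrative only,
# not the algorithm from the paper): a crude multiobjective GA picks K
# reference points that induce a partition, then one small neural module
# is trained per partition.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
K = 4  # assumed number of subspaces

def assign(X, centres):
    """Label each sample with the index of its nearest reference point."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return d.argmin(axis=1)

def objectives(centres):
    """Two illustrative objectives: within-partition spread and size imbalance."""
    labels = assign(X, centres)
    spread = sum(((X[labels == k] - centres[k]) ** 2).sum()
                 for k in range(K) if (labels == k).any())
    sizes = np.bincount(labels, minlength=K)
    return np.array([spread, sizes.max() - sizes.min()])

def dominates(a, b):
    """Pareto dominance: a is no worse in all objectives and better in one."""
    return np.all(a <= b) and np.any(a < b)

# Simple (mu + lambda) evolutionary loop with crude Pareto-rank survivor selection.
pop = [X[rng.choice(len(X), K, replace=False)] for _ in range(20)]
for _ in range(30):
    children = [p + rng.normal(0.0, 0.1, p.shape) for p in pop]
    union = pop + children
    scores = [objectives(c) for c in union]
    ranks = [sum(dominates(s2, s1) for s2 in scores) for s1 in scores]
    pop = [union[i] for i in np.argsort(ranks)[:20]]

centres = pop[0]            # a (near-)Pareto-optimal partitioning
labels = assign(X, centres)

# Learning follows decomposition: one small neural module per partition.
modules = {}
for k in range(K):
    mask = labels == k
    if mask.sum() and len(np.unique(y[mask])) > 1:
        modules[k] = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                                    random_state=0).fit(X[mask], y[mask])

def predict(x):
    """Route a sample to its partition's module; fall back to the majority class."""
    k = assign(x[None, :], centres)[0]
    if k in modules:
        return modules[k].predict(x[None, :])[0]
    return np.bincount(y[labels == k]).argmax() if (labels == k).any() else y.max()

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print(f"training accuracy of the hierarchical ensemble: {acc:.2f}")
```

Each module only has to learn a localised decision surface, which is the intuition behind the claim that such partitions simplify learning; the routing step (nearest reference point here) stands in for the hierarchical classifier of the original strategy.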