Input partitioning to mixture of experts

In a supervised learning context, the mixture of experts approach uses several neural networks in parallel to provide a modular solution to the overall problem. Under the mixture of experts architecture, a method is investigated for designing the number of experts and assigning local regions of the input space to individual experts. Both classification performance and the transparency of the scheme are found to be significantly better than those of a standard mixture of experts.
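To make the standard architecture concrete, the following is a minimal sketch (not the paper's method) of a mixture-of-experts forward pass: a gating network produces soft weights over the experts for each input, and the overall output is the gate-weighted sum of the expert outputs. The linear experts, dimensions, and random initialisation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, d_in, d_out = 3, 4, 2

# Each expert is a single linear map here, purely for brevity;
# in practice each expert would be its own neural network.
expert_weights = [rng.standard_normal((d_in, d_out)) for _ in range(n_experts)]

# The gating network is also linear, followed by a softmax over experts,
# so the gate values for each input are nonnegative and sum to one.
gate_weights = rng.standard_normal((d_in, n_experts))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(x):
    # x: (batch, d_in)
    gates = softmax(x @ gate_weights)                    # (batch, n_experts)
    outputs = np.stack([x @ W for W in expert_weights])  # (n_experts, batch, d_out)
    # Combine: weight each expert's output by its gate value, sum over experts.
    return np.einsum('be,ebd->bd', gates, outputs)

x = rng.standard_normal((5, d_in))
y = moe_forward(x)
print(y.shape)  # (5, 2)
```

The softmax gating realises a soft partition of the input space: each expert effectively specialises in the region where its gate value dominates, which is the partitioning that the proposed scheme makes explicit.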