Empirical Evaluation of Bayesian Sampling for Neural Classifiers

Adopting a Bayesian approach and sampling the network parameters from their posterior distribution is a novel and promising method for improving the generalisation performance of neural network predictors. The present empirical study applies this scheme to a range of synthetic and real-world classification problems. The paper focuses on how the prediction results depend on the prior distribution of the network parameters and hyperparameters, and provides a critical evaluation of the automatic relevance determination (ARD) scheme for detecting irrelevant inputs.
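To make the sampling scheme concrete, the following is a minimal sketch, not the paper's actual method: a toy logistic classifier whose weights are sampled from their posterior with random-walk Metropolis under a Gaussian prior with per-input precisions (the ARD form). The data set, the fixed precisions `alpha`, and all variable names are assumptions for illustration; in the full scheme the hyperparameters would themselves be sampled (typically with more sophisticated methods such as hybrid Monte Carlo).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first of three inputs is relevant to the class label.
n = 200
X = rng.normal(size=(n, 3))
y = (X[:, 0] > 0).astype(float)

def log_posterior(w, alpha):
    """Unnormalised log posterior of weights w under a Gaussian
    ARD prior with one precision alpha[i] per input weight."""
    logits = X @ w
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    log_prior = -0.5 * np.sum(alpha * w**2)
    return log_lik + log_prior

# Fixed ARD precisions for illustration only; in the full Bayesian
# scheme these hyperparameters are sampled as well.
alpha = np.ones(3)

# Random-walk Metropolis over the weights.
w = np.zeros(3)
lp = log_posterior(w, alpha)
samples = []
for step in range(5000):
    w_prop = w + 0.1 * rng.normal(size=3)
    lp_prop = log_posterior(w_prop, alpha)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        w, lp = w_prop, lp_prop
    if step >= 1000:                           # discard burn-in
        samples.append(w.copy())
samples = np.array(samples)

# Bayesian prediction: average class probabilities over posterior samples
# rather than using a single point estimate of the weights.
probs = 1.0 / (1.0 + np.exp(-(X @ samples.T)))
pred = probs.mean(axis=1) > 0.5
accuracy = (pred == (y > 0.5)).mean()
```

Averaging the predictive probabilities over posterior samples, rather than plugging in a single weight estimate, is the mechanism by which the Bayesian treatment is expected to improve generalisation.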