Self-Consistent Neural Receptive Fields

The problem of optimally determining neural receptive fields is addressed. A principle of self-consistency is formulated, based on maximizing the overlap of the receptive fields with the reference data. An iteration formula is derived which yields, for the receptive field width, approximately the distance to the nearest-neighbor center. Numerical examples show that a probabilistic neural network with self-consistent receptive fields estimates continuous probability densities better than Parzen's estimator on normal, multimodal, and exponential distributions. A generalization to the multivariate case yields a simple method for determining an ellipsoidal basis function neural network.
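
The following is a minimal sketch of the idea described above, not the paper's self-consistency iteration itself: it uses the nearest-neighbor distance directly as each receptive field width (the value the abstract says the iteration approximately converges to) inside a Parzen/PNN-style Gaussian density estimate. The function names nearest_neighbor_widths and pnn_density, the Gaussian kernel form, and the bimodal test data are illustrative assumptions.

```python
import numpy as np

def nearest_neighbor_widths(centers):
    """Distance from each training point to its nearest neighbor.

    Used here as the receptive field width; the abstract states the
    self-consistency iteration yields approximately this value.
    """
    diffs = centers[:, None, :] - centers[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude self-distance
    return dists.min(axis=1)

def pnn_density(x, centers, widths):
    """Parzen/PNN-style density estimate with per-center Gaussian widths."""
    d = centers.shape[1]
    diffs = x[:, None, :] - centers[None, :, :]
    sq = (diffs ** 2).sum(axis=-1)
    norm = (2.0 * np.pi) ** (d / 2) * widths ** d
    kernels = np.exp(-sq / (2.0 * widths ** 2)) / norm
    return kernels.mean(axis=1)

# Usage: density estimate on 1-D samples from a bimodal mixture.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2, 0.5, 50),
                          rng.normal(2, 1.0, 50)])[:, None]
widths = nearest_neighbor_widths(samples)
grid = np.linspace(-5, 5, 201)[:, None]
p_hat = pnn_density(grid, samples, widths)
```

Per-center widths adapt the estimate to local sample density, which is the intuition behind comparing favorably with a fixed-width Parzen estimator on multimodal and exponential data.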