Convergence of a self-organizing stochastic neural network

In this paper, we study the convergence of a stochastic neural process in which a "physiologically plausible" Hebbian learning rule gives rise to a self-organization phenomenon. Some preliminary results concern the asymptotic behaviour of the network when neurons are updated sequentially, partially in parallel, or massively in parallel. We pay particular attention to the fact that Hebbian learning is closely linked to the underlying dynamics of the network. We then give, within the mathematical framework of stochastic approximation, conditions under which the learning scheme converges.
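
To make the three update modes concrete, the following is a minimal toy sketch, not the specific model analysed in the paper: a small network of threshold units whose weights change under a plain Hebbian rule, with the neuron states refreshed either sequentially, for a random subset (partially parallel), or all at once (massively parallel). All names and parameters (`n_units`, `eta`, the activation) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only; the paper's network and learning rule may differ.
rng = np.random.default_rng(0)

n_units = 8   # number of neurons (assumed)
eta = 0.05    # learning rate (assumed)
W = rng.normal(scale=0.1, size=(n_units, n_units))
np.fill_diagonal(W, 0.0)  # no self-connections


def activate(s):
    """Threshold activation producing binary states in {-1, +1}."""
    return np.where(s >= 0.0, 1.0, -1.0)


def step(x, mode="sequential"):
    """One network update followed by a Hebbian weight change.

    mode: 'sequential' -> neurons updated one at a time, in order
          'partial'    -> a random half of the neurons updated together
          'parallel'   -> all neurons updated simultaneously
    """
    x = x.copy()
    if mode == "sequential":
        for i in range(n_units):
            x[i] = activate(W[i] @ x)       # each unit sees fresh states
    elif mode == "partial":
        idx = rng.choice(n_units, size=n_units // 2, replace=False)
        x[idx] = activate(W[idx] @ x)       # chosen units use the old state
    else:
        x = activate(W @ x)                 # all units use the old state
    # Hebb's rule: strengthen connections between co-active units.
    dW = eta * np.outer(x, x)
    np.fill_diagonal(dW, 0.0)
    return x, dW


x = activate(rng.normal(size=n_units))
for t in range(100):
    x, dW = step(x, mode="sequential")
    W += dW
```

Because the weight increment depends on the states produced by the chosen update mode, the learning trajectory differs across the three modes, which is why the asymptotic analysis must be carried out for each case.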
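
As an illustration of the kind of condition such a stochastic approximation analysis typically imposes (not necessarily the conditions established in this paper), the classical Robbins-Monro requirements on the learning rates $\eta_t$ are:

```latex
% Classical Robbins--Monro step-size conditions (illustrative):
\eta_t \ge 0, \qquad
\sum_{t \ge 0} \eta_t = \infty, \qquad
\sum_{t \ge 0} \eta_t^2 < \infty .
```

The first sum guarantees that the steps do not vanish too quickly to reach the limit set, while the square-summability controls the accumulated noise.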