Extended Nonlinear Hebbian Learning for Developing Sparse-Distributed Representation

Recently, Hebbian learning has been extended to nonlinear units, yielding a number of interesting properties and potential applications, e.g., blind signal separation. However, when generalizing these nonlinear Hebbian learning algorithms to a network with multiple units, all existing methods assume orthonormality constraints, which are too restrictive in many settings. In this paper, we propose two alternative approaches that generalize nonlinear Hebbian learning to a network of M neurons, based on the mixture-of-experts paradigm. Preliminary simulations show interesting results.
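To make the single-unit case concrete, the sketch below illustrates one common form of nonlinear Hebbian learning: an Oja-style stabilized update with a nonlinearity g applied to the unit's output. This is a generic illustration, not the specific algorithm proposed in this paper; the choice of g = tanh, the learning rate, and the toy data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of a 2-D signal whose first axis has much
# larger variance than the second (assumed for illustration only).
n_samples, dim = 200, 2
X = rng.standard_normal((n_samples, dim)) * np.array([3.0, 0.5])

# Hypothetical nonlinearity g; the paper does not fix this choice.
g = np.tanh

# Random unit-norm initial weight vector for a single neuron.
w = rng.standard_normal(dim)
w /= np.linalg.norm(w)
eta = 0.01  # learning rate (assumed)

for epoch in range(50):
    for x in X:
        y = w @ x
        # Oja-style stabilized nonlinear Hebbian update:
        #   dw = eta * g(y) * (x - y * w)
        # The -y*w term keeps the weight norm bounded without an
        # explicit orthonormality constraint.
        w += eta * g(y) * (x - y * w)

# After training, w should align with the dominant data direction
# (axis 0 in this toy setup).
```

The multi-unit generalizations discussed in the paper address what this single-unit rule leaves open: how several such neurons can learn jointly without imposing orthonormality across their weight vectors.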