Sparse representation from a winner-take-all neural network

We introduce an incremental algorithm for independent component analysis (ICA) based on maximizing a sparseness criterion. We propose a new sparseness measure as the criterion function. The learning algorithm derived from this criterion leads to a winner-take-all learning mechanism. It avoids the optimization of high-order nonlinear functions and the density estimation used by other ICA methods. We show that when the latent independent random variables have super-Gaussian distributions, the network efficiently extracts the independent components.
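The abstract does not spell out the update rule, so the following is only a hypothetical sketch of the general idea: whiten linearly mixed super-Gaussian (here Laplacian) sources, then apply an incremental winner-take-all Hebbian update in which only the unit with the largest absolute response learns on each sample. The mixing setup, learning rate, and normalization step are all illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two super-Gaussian (Laplacian) sources, linearly mixed (assumed setup).
n, T = 2, 2000
S = rng.laplace(size=(n, T))
A = rng.normal(size=(n, n))
X = A @ S

# Whiten the mixtures: zero mean, identity covariance.
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Winner-take-all Hebbian sketch: per sample, only the unit with the
# largest absolute output updates its weight vector, then renormalizes.
W = rng.normal(size=(n, n))
W /= np.linalg.norm(W, axis=1, keepdims=True)
eta = 0.01  # illustrative learning rate
for t in range(T):
    x = Z[:, t]
    y = W @ x
    k = np.argmax(np.abs(y))       # winner-take-all selection
    W[k] += eta * y[k] * x         # Hebbian update for the winner only
    W[k] /= np.linalg.norm(W[k])   # keep unit norm
```

Because the data are whitened, each unit-norm row of `W` parameterizes a candidate independent direction; the competition keeps updates incremental and avoids explicit optimization of a high-order nonlinear objective.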