Gradient sparse optimization via competitive learning

In this paper, we propose a new method to achieve sparsity via a competitive learning principle for linear kernel regression and classification tasks. We form the dual of the LASSO criterion, transforming an ℓ1-norm minimization into an ℓ∞-norm maximization problem. We introduce a novel solution derived from gradient descent, which links sparse representation to the competitive learning scheme. This framework is applicable to a variety of problems, such as regression, classification, feature selection, and data clustering.
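To make the ℓ1/ℓ∞ connection concrete, the sketch below is not the paper's competitive-learning algorithm but a standard baseline it builds on: solving the LASSO by proximal gradient descent (ISTA) with NumPy, then checking the dual ℓ∞ condition that the optimum satisfies. All function names and problem sizes here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    # ISTA: proximal gradient descent on
    #   min_w 0.5 * ||y - X w||_2^2 + lam * ||w||_1
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)       # gradient of the least-squares term
        w = soft_threshold(w - grad / L, lam / L)
    return w

# Synthetic sparse regression problem (illustrative sizes).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(50)

w = lasso_ista(X, y, lam=1.0)
# The l1 penalty drives most coefficients exactly to zero.
# At the optimum, the residual nu = y - X w is dual-feasible:
# ||X^T nu||_inf <= lam, which is the l-infinity face of LASSO duality.
dual_gap_norm = np.max(np.abs(X.T @ (y - X @ w)))
```

The dual-feasibility check mirrors the abstract's point: minimizing the ℓ1 penalty in the primal corresponds to an ℓ∞-constrained problem in the dual, which is where a max-style (competitive) update rule can be formulated.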