Learning by Back-Propagating Output Correlation in Winner-takes-all and Auto-associative Networks

This paper presents a correlation penalty term added to the error function of the back-propagation (BP) training algorithm. During training, an additional term is propagated back into the weight update equation alongside the standard back-propagation term. In addition to minimizing the sum-of-squared-errors function, the correlation among output nodes is minimized (or maximized) by the action of the output correlation penalty term. One aim of correlation back-propagation is to investigate the representation learned under the penalty function and to extract important aspects of the input domain. The algorithm is applied to classification tasks, including the diabetes and glass identification problems. A preliminary experiment with two images investigates training of an auto-associative network using the proposed accumulated update rules.
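To make the idea of the penalized error concrete, the following is a minimal sketch of one plausible form of such an objective: the sum-of-squared-errors loss plus a weighted pairwise correlation-style penalty over the output nodes. The function names, the exact penalty form (products of deviations from the output mean, summed over distinct node pairs), and the weighting parameter `lam` are illustrative assumptions, not the paper's definitive formulation.

```python
import numpy as np

def correlation_penalty(outputs):
    # Pairwise penalty over output nodes (illustrative form):
    # sum_i dev_i * sum_{j != i} dev_j, where dev_k = o_k - mean(o).
    dev = outputs - outputs.mean()
    total = dev.sum()
    return float(np.sum(dev * (total - dev)))

def total_error(targets, outputs, lam=0.1):
    # Sum-of-squared-errors term plus the weighted correlation penalty.
    # lam is a hypothetical trade-off parameter; its sign controls whether
    # output correlation is penalized or encouraged during training.
    sse = 0.5 * float(np.sum((targets - outputs) ** 2))
    return sse + lam * correlation_penalty(outputs)
```

Differentiating this combined objective with respect to the weights yields the usual BP gradient plus the extra penalty-driven term that is sent back to the weight update equation.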