Improved Representation-burden Conservation Network for Learning Non-stationary VQ

In a recent publication [1], it was shown that a biologically plausible representation-burden conservation network (RCN), in which conservation is achieved by bounding the summed representation burden of all neurons at a constant 1, is effective in learning stationary vector quantization. Based on this conservation principle, we introduce a new approach for designing a dynamic RCN that processes both stationary and non-stationary inputs. We show that, in response to changes in the input statistics, the dynamic RCN improves on its original counterpart both in incremental learning capability and in self-organizing the network structure. Performance comparisons between the dynamic RCN and other self-development models are also presented. Simulation results show that the dynamic RCN is very effective in training a near-optimal vector quantizer, in that it maintains a balance between the equiprobability and equidistortion criteria.
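The abstract's core idea, conserving the summed representation burden of all neurons at 1 so that codebook usage tends toward equiprobability while competitive updates reduce distortion, can be illustrated with a minimal sketch. The paper's exact update rules are not reproduced here; the burden update below is an assumed conscience-style mechanism, and all names and parameters (`burden_rate`, `lr`, the distance handicap) are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not taken from the paper)
n_neurons, dim = 8, 2
lr, burden_rate = 0.05, 0.01

codebook = rng.normal(size=(n_neurons, dim))
# Representation burdens: conserved so they always sum to 1
burden = np.full(n_neurons, 1.0 / n_neurons)

def step(x):
    """One competitive-learning step with burden-modulated winner selection."""
    global burden
    dist = np.linalg.norm(codebook - x, axis=1)
    # Neurons carrying a large burden are handicapped, nudging the network
    # toward equiprobable winning (an assumed conscience-style mechanism)
    winner = np.argmin(dist * burden * n_neurons)
    # Move the winner toward the input to reduce quantization distortion
    codebook[winner] += lr * (x - codebook[winner])
    # Update burdens toward the winner, then renormalize to conserve sum = 1
    target = np.zeros(n_neurons)
    target[winner] = 1.0
    burden = (1.0 - burden_rate) * burden + burden_rate * target
    burden /= burden.sum()
    return winner

# Train on a toy stationary source
for _ in range(5000):
    step(rng.normal(size=dim))
```

In this sketch the burden vector plays the role of the conserved quantity: renormalizing after every update keeps its sum fixed at 1, and using it to handicap frequent winners balances the equiprobability pressure against the distortion-reducing winner update.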