On a novel unsupervised competitive learning algorithm for scalar quantization

This letter presents a novel unsupervised competitive learning rule for scalar quantization, called the boundary adaptation rule (BAR). It is shown both mathematically and through simulations that BAR converges to equiprobable quantizations of univariate probability density functions and, in this way, outperforms other unsupervised competitive learning rules.
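
The abstract's target, an equiprobable scalar quantizer, can be illustrated with a short sketch. The exact BAR update is not given here, so the code below uses a simple stochastic quantile-tracking rule as a stand-in: each decision boundary is nudged toward the corresponding j/K quantile of the input density, which yields cells of equal probability mass. The function name `equiprobable_boundaries` and all parameter choices are illustrative assumptions, not the letter's algorithm.

```python
import numpy as np

def equiprobable_boundaries(samples, k, eta=0.01, seed=0):
    """Adapt K-1 decision boundaries so that each of the K cells of a
    scalar quantizer captures roughly 1/K of the probability mass.

    Illustrative quantile-tracking update (not the exact BAR rule):
    boundary j is driven toward the j/K quantile of the input distribution.
    """
    rng = np.random.default_rng(seed)
    # Initialize boundaries from a few random samples; any spread-out init works.
    b = np.sort(rng.choice(samples, size=k - 1, replace=False)).astype(float)
    targets = np.arange(1, k) / k          # desired mass to the left of each boundary
    for x in samples:
        # Stochastic gradient step on the pinball (quantile) loss for each boundary:
        # E[step] = eta * (j/K - F(b_j)), which vanishes at the j/K quantile.
        b += eta * (targets - (x < b))
        b.sort()                            # keep boundaries ordered
    return b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(size=50_000)          # univariate density: standard normal
    b = equiprobable_boundaries(data, k=4)
    cells = np.searchsorted(b, data)        # each cell should hold ~25% of the samples
    print("boundaries:", np.round(b, 3))
    print("cell frequencies:", np.round(np.bincount(cells, minlength=4) / data.size, 3))
```

For the standard normal example, the boundaries should settle near the 25/50/75 percent quantiles (about -0.67, 0, 0.67), giving four cells of approximately equal probability, which is the equiprobable partition the letter shows BAR converges to.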
