Image compression using HLVQ neural network
We apply a new neural network, HLVQ, which combines supervised and unsupervised learning, to vector quantization. A supervised learning rule based on learning vector quantization 2 (LVQ2) performs attention focusing on top of a self-organizing feature map algorithm. The network exhibits the salient features of both algorithms: the topology-preserving mapping is acquired through unsupervised learning, while supervised learning keeps the overlap between classes to a minimum. Pattern labelling is carried out by a separate unsupervised network that takes as input the discrete cosine transform of a pattern. First, the labelling network is trained on the transforms of the sub-images; each neuron of this network is considered the prototype of one class. Once convergence is achieved, HLVQ is trained: each sub-image is input to the network, and the class of the input pattern is determined by the most activated neuron of the labelling network upon presentation of the sub-image transform.
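The abstract leaves out implementation details, so the following is only a minimal sketch of the described pipeline: DCT features of sub-images drive an unsupervised labelling network, and the resulting class labels supervise the quantizer. Assumptions not taken from the paper include 8x8 sub-images, a synthetic test image, the numbers of classes and codewords, the learning rates, and a simplified learning rule (plain competitive learning for the labelling network and a single LVQ1-style correction standing in for the full HLVQ/LVQ2 update).

```python
import numpy as np


def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m


def split_blocks(image, block=8):
    """Split a grayscale image into flattened block x block sub-images."""
    h, w = image.shape
    return np.array([image[i:i + block, j:j + block].ravel()
                     for i in range(0, h - h % block, block)
                     for j in range(0, w - w % block, block)])


def train_labelling_network(feats, n_classes=16, epochs=20, lr=0.1, seed=0):
    """Unsupervised competitive learning over DCT features; each neuron becomes
    the prototype of one class (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    protos = feats[rng.choice(len(feats), n_classes, replace=False)].copy()
    for _ in range(epochs):
        for x in feats[rng.permutation(len(feats))]:
            w = int(np.argmin(((protos - x) ** 2).sum(axis=1)))  # most activated neuron
            protos[w] += lr * (x - protos[w])
        lr *= 0.9
    return protos


def classify(feats, protos):
    """Class of each sub-image = index of the most activated labelling neuron."""
    dists = ((feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(dists, axis=1)


def train_hlvq(blocks, labels, codebook_size=32, epochs=10, lr=0.05, seed=1):
    """Simplified stand-in for the HLVQ update: an unsupervised pull toward the
    input, with an LVQ-style supervised sign that repels wrongly-labelled winners."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(blocks), codebook_size, replace=False)
    codebook, code_labels = blocks[idx].copy(), labels[idx].copy()
    for _ in range(epochs):
        order = rng.permutation(len(blocks))
        for x, c in zip(blocks[order], labels[order]):
            w = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))
            sign = 1.0 if code_labels[w] == c else -1.0
            codebook[w] += sign * lr * (x - codebook[w])
        lr *= 0.9
    return codebook


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    image = rng.random((128, 128))                 # stand-in for a real grayscale image
    block = 8
    raw = split_blocks(image, block)               # sub-images fed to the quantizer
    d = dct_matrix(block)
    feats = np.array([(d @ b.reshape(block, block) @ d.T).ravel() for b in raw])

    protos = train_labelling_network(feats)        # train the labelling network first
    classes = classify(feats, protos)              # label every sub-image by its DCT
    codebook = train_hlvq(raw, classes)            # then train the quantizer

    # Compression: each sub-image is replaced by the index of its nearest codeword.
    idx = np.argmin(((raw[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2), axis=1)
    mse = ((raw - codebook[idx]) ** 2).mean()
    print(f"{len(raw)} sub-images -> {len(codebook)} codewords, reconstruction MSE = {mse:.4f}")
```

As a usage note, the random image above is only a placeholder; on real data one would replace it with a loaded grayscale image and store the codeword indices plus the codebook as the compressed representation.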