Parallelization of Image Compression on Distributed Memory Architecture

In this work we propose two parallel algorithms for image compression based on multilayer neural networks, in which the image is subdivided into blocks. The first parallel technique uses a static distribution of blocks to processors. Its advantage is that the training phase (construction of the compressor-decompressor network) requires no communication, but its drawback is load imbalance. The second parallel technique improves load balancing by using a dynamic distribution of blocks, at the cost of communication between processors, as illustrated in the sketch below. These two implementations are tested and compared on a distributed memory machine under PVM.
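The following minimal sketch (not the paper's code) contrasts the two distribution strategies described above: a static assignment fixed before training, and a dynamic, on-demand assignment handled by a master. The block count, processor count, and helper names are hypothetical; a real implementation would replace the simulated request loop with PVM message passing.

```c
/* Minimal sketch (assumed, not the authors' code): static vs. dynamic
 * assignment of image blocks to processors. Counts are hypothetical. */
#include <stdio.h>

#define NUM_BLOCKS 16   /* image subdivided into 16 blocks (assumed) */
#define NUM_PROCS   4   /* number of processors (assumed)            */

/* Static distribution: block b is bound to processor b % NUM_PROCS once,
 * before training starts, so no further communication is needed.     */
static int static_owner(int block) {
    return block % NUM_PROCS;
}

int main(void) {
    printf("Static distribution (fixed before training):\n");
    for (int b = 0; b < NUM_BLOCKS; b++)
        printf("  block %2d -> processor %d\n", b, static_owner(b));

    /* Dynamic distribution: a master hands the next unprocessed block to
     * whichever processor asks for work. This balances load but costs one
     * request/reply exchange per block (simulated here sequentially).   */
    printf("Dynamic distribution (on demand):\n");
    int next_block = 0;
    int requester  = 0;                        /* simulated idle processor */
    while (next_block < NUM_BLOCKS) {
        printf("  processor %d requests work, receives block %d\n",
               requester, next_block);
        next_block++;
        requester = (requester + 1) % NUM_PROCS; /* stand-in for real requests */
    }
    return 0;
}
```

In the static case the mapping is computed locally by every processor, which is why training proceeds without communication; in the dynamic case the master's request/reply exchanges are the communication overhead mentioned above.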