Clustering For Designing Error Correcting Codes

Acknowledgements

I had the great pleasure of working with Dr. Anamitra Makur during my stay at the Institute. But for him, this thesis would not have seen the light of day, and I would like to thank him for the help he has rendered during this work. I also thank Suryan for their help, Jyothish and Jemlin for making my stay at the Institute a happy and memorable one, and Prahh and Mala, who were always helpful to me. Finally, I would like to thank my parents for their constant encouragement throughout the course.

Abstract

In this thesis we address the problem of designing codes for specific applications. To do so we make use of the relationship between clusters and codes: designing a block code over any finite-dimensional space may be thought of as forming the corresponding number of clusters over that space. The literature offers a number of clustering algorithms, and we have examined the performance of several of them for code design, such as Linde-Buzo-Gray, Simulated Annealing, Simulated Annealing combined with Linde-Buzo-Gray, and Deterministic Annealing. However, all of these algorithms use the Euclidean squared-error distance measure for clustering, which does not match the distance measure of interest in the error-correcting scenario, namely the Hamming distance. Consequently, we have developed an algorithm that can be used for clustering with the Hamming distance as the distance measure. It has also been observed that stochastic algorithms such as Simulated Annealing fail to produce optimum codes due to very slow convergence near the end. As a remedy, we propose a modification to such algorithms for code design, based on the code structure, which makes it possible to converge to the optimum codes.
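The clustering view of code design lends itself to a Lloyd-style iteration in which the squared-error centroid update is replaced by a bitwise majority vote, which is the centroid under Hamming distortion. The sketch below is only a minimal illustration of clustering binary vectors with the Hamming distance, not the algorithm developed in the thesis; the function name hamming_lloyd, the random initialization, and the toy data are assumptions made for the example.

```python
import numpy as np

def hamming_lloyd(data, num_codewords, iters=50, seed=0):
    """Lloyd-style clustering of binary vectors under Hamming distance.

    data: (N, n) array of 0/1 integers (training vectors).
    Returns a (num_codewords, n) array of binary codewords.
    """
    rng = np.random.default_rng(seed)
    # Initialize codewords with randomly chosen training vectors.
    codebook = data[rng.choice(len(data), num_codewords, replace=False)].copy()

    for _ in range(iters):
        # Assignment step: nearest codeword by Hamming distance.
        dists = (data[:, None, :] != codebook[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)

        # Update step: the centroid of a cluster under Hamming distortion
        # is the bitwise majority vote of its members.
        new_codebook = codebook.copy()
        for k in range(num_codewords):
            members = data[labels == k]
            if len(members) > 0:
                new_codebook[k] = (members.mean(axis=0) >= 0.5).astype(int)

        # Stop when the codebook no longer changes.
        if np.array_equal(new_codebook, codebook):
            break
        codebook = new_codebook

    return codebook

# Example: cluster random 7-bit vectors into 8 codewords.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    training = rng.integers(0, 2, size=(200, 7))
    print(hamming_lloyd(training, 8))
```

As with the squared-error version of the generalized Lloyd algorithm, this iteration only converges to a local optimum; the stochastic algorithms discussed in the thesis are aimed at escaping such local optima.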
