Sparse Regression Codes

Developing computationally efficient codes that approach the Shannon-theoretic limits for communication and compression has long been one of the major goals of information and coding theory. There have been significant advances towards this goal over the past two decades, with the emergence of turbo codes, sparse-graph codes, and polar codes. These codes are designed primarily for discrete-alphabet channels and sources. For Gaussian channels and sources, where the alphabet is inherently continuous, Sparse Superposition Codes, also called Sparse Regression Codes (SPARCs), are a promising class of codes for achieving the Shannon limits. This survey provides a unified and comprehensive overview of sparse regression codes, covering theory, algorithms, and practical implementation aspects. The first part of the monograph focuses on SPARCs for AWGN channel coding, and the second part on SPARCs for lossy compression (with a squared-error distortion criterion). In the third part, SPARCs are used to construct codes for Gaussian multi-terminal channel and source coding models such as broadcast channels, multiple-access channels, and source and channel coding with side information. The survey concludes with a discussion of open problems and directions for future work.
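
To make the construction concrete, the following is a minimal toy sketch of SPARC encoding, based on the standard definition of the codebook (not code from the survey itself). A SPARC codeword is x = Aβ, where A is an n × ML Gaussian design matrix and the message vector β has L sections of M entries each, with exactly one nonzero entry per section; the rate is R = L log₂(M) / n bits per channel use. All names and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, n = 4, 8, 16            # sections, section size, codeword length
logM = int(np.log2(M))        # bits carried per section

# Design matrix: i.i.d. N(0, 1/n) entries, so codewords have unit
# average power under the flat power allocation used below.
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))

def encode(bits):
    """Map L*log2(M) message bits to the SPARC codeword x = A @ beta."""
    assert len(bits) == L * logM
    beta = np.zeros(L * M)
    for l in range(L):
        chunk = bits[l * logM:(l + 1) * logM]
        idx = int("".join(map(str, chunk)), 2)   # position within section l
        beta[l * M + idx] = np.sqrt(n / L)       # flat power allocation
    return A @ beta

msg = rng.integers(0, 2, size=L * logM)          # 12 message bits
x = encode(msg)
print(x.shape, L * logM / n)                     # codeword length, rate
```

With these toy parameters the rate is 12/16 = 0.75 bits per channel use; practical SPARCs use much larger L and M, and the survey's decoders (e.g. approximate message passing) recover β from a noisy observation of x.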
