Least squares superposition codes of moderate dictionary size, reliable at rates up to capacity